PostgreSQL follows an annual release cadence for major versions, typically shipping in September or October each year. Each major version is supported for five years from its initial release date, after which it reaches end-of-life and no longer receives security or bug fixes. Minor releases (patch versions) arrive quarterly and require no dump/restore - just stop the server, swap the binaries, and restart. Major upgrades are a different story: they require either a pg_upgrade run or a full dump and reload, which makes the cost of skipping versions non-trivial on large databases.
The practical consequence of this cadence is that teams often find themselves two or three major versions behind, then face a larger migration gap. Understanding what each version actually delivers helps prioritize that work - and decide whether the new capabilities justify the upgrade effort now versus waiting.
PostgreSQL 16: Replication Flexibility and I/O Observability
Released September 2023, PostgreSQL 16 addressed two long-standing gaps: visibility into I/O behavior and the constraints around logical replication topology.
The new pg_stat_io system view is one of the most operationally useful additions in years. It exposes I/O statistics broken down by backend type, target object (relation or temp relation), and I/O context (normal, vacuum, bulkread, or bulkwrite). Before this, diagnosing whether a performance problem was due to poor buffer cache hit rates or actual disk throughput required external tools or OS-level instrumentation. Now you can query the database directly:
SELECT backend_type, object, context, reads, writes, hits
FROM pg_stat_io
WHERE object = 'relation'
ORDER BY reads DESC;
On the replication side, PG 16 introduced logical decoding on standby servers. Previously, logical replication slots had to live on the primary, which meant all decoding overhead landed there. Offloading that work to a standby matters for high-write primaries where decoding overhead is measurable. The release also added parallel apply for large in-progress transactions on subscribers - controlled via max_parallel_apply_workers_per_subscription - which addresses a common throughput bottleneck where a single large transaction would stall subscriber apply. Bidirectional logical replication also began taking shape: data can flow between two nodes without looping, using origin filtering via the origin option on CREATE SUBSCRIPTION.
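A two-node active-active sketch of that origin filtering might look like this (node, publication, and table names are illustrative):

```sql
-- On node A: publish local changes and subscribe to node B.
-- origin = none tells the subscription to apply only changes that
-- originated on node B itself, so rows replicated from A are not
-- echoed back (which would otherwise loop between the nodes).
CREATE PUBLICATION pub_orders FOR TABLE orders;

CREATE SUBSCRIPTION sub_from_b
  CONNECTION 'host=node-b dbname=app'
  PUBLICATION pub_orders
  WITH (origin = none, copy_data = false);
```

The mirror-image publication and subscription would be created on node B. copy_data = false is used here because both nodes are assumed to start from the same data; an initial sync with origin filtering needs care.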
Parallelism and SQL/JSON
The query planner gained the ability to parallelize FULL and right OUTER hash joins, and aggregate functions like string_agg() and array_agg() can now run in parallel. These are not exotic operations - they appear in many ETL and reporting queries. The planner also extended incremental sort to DISTINCT queries, which reduces sort overhead when the leading sort key is already ordered.
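As an illustrative example (schema invented here), a reporting query like this can now use parallel workers for the aggregation step, where earlier versions forced a serial plan:

```sql
-- PG 16+ can compute partial string_agg() states in parallel
-- workers and combine them in the leader process.
SELECT customer_id,
       string_agg(order_ref, ', ') AS refs
FROM orders
GROUP BY customer_id;
```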
SQL/JSON got a set of standard-compliant constructors: JSON_ARRAY(), JSON_ARRAYAGG(), JSON_OBJECT(), JSON_OBJECTAGG(), and the IS JSON predicate. These bring PostgreSQL closer to the ISO SQL:2016 JSON standard, filling in gaps that previously required wrapping calls to jsonb_build_object() or to_json().
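A quick sketch of the new constructors and predicate (values are illustrative):

```sql
-- Standard constructors instead of jsonb_build_object()/to_json()
SELECT JSON_OBJECT('sku': 'A-100', 'qty': 3) AS item,
       JSON_ARRAY(1, 2, 3) AS nums;

-- IS JSON validates a string without erroring on malformed input
SELECT '{"a": 1}' IS JSON;   -- true
SELECT 'not json' IS JSON;   -- false
```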
PostgreSQL 17: Vacuum, Incremental Backup, and JSON_TABLE
PostgreSQL 17 shipped September 26, 2024 and brought structural improvements to some of the most complaint-prone areas: VACUUM memory consumption, backup size management, and JSON query capabilities.
The VACUUM memory overhaul is arguably the highest-impact change for busy databases. The internal data structure VACUUM uses to track dead tuples was redesigned, reducing memory consumption by up to 20x in some scenarios. The practical effect is that autovacuum_work_mem (or maintenance_work_mem) can be kept lower while still processing large tables effectively, and VACUUM is less likely to fail to reclaim dead space due to running out of tracking capacity. Combined with vacuum progress reporting for indexes (visible in pg_stat_progress_vacuum), it is now much easier to understand why a vacuum pass is slow.
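While a vacuum pass is running, the progress view shows which phase it is in and how many index vacuum cycles have been required - a rough sketch:

```sql
-- One row per backend currently running VACUUM; a high
-- index_vacuum_count suggests dead-tuple tracking memory is
-- too small for the table.
SELECT p.pid,
       c.relname,
       p.phase,
       p.heap_blks_scanned,
       p.heap_blks_total,
       p.index_vacuum_count
FROM pg_stat_progress_vacuum p
JOIN pg_class c ON c.oid = p.relid;
```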
Incremental backup support via pg_basebackup is new in PG 17. Rather than always copying the full data directory, incremental backups capture only pages modified since the last base or incremental backup. The pg_combinebackup utility reconstructs a usable full backup from a base plus a chain of incrementals. For databases in the hundreds of gigabytes or larger, this changes the economics of backup frequency considerably - taking a nightly incremental instead of a full backup reduces both storage costs and backup window duration.
Logical Replication Lifecycle Improvements
One of the more operationally painful aspects of major version upgrades was that logical replication slots were dropped during pg_upgrade, forcing manual reconstruction of subscriptions downstream. PG 17 preserves logical replication slots through the upgrade process, but only when upgrading from PostgreSQL 17 to a later version; upgrades from PG 16 or earlier to PG 17 still drop slots and require the old manual rebuild. The new pg_createsubscriber utility converts an existing physical standby into a logical replica, which simplifies certain migration and HA topology changes. Failover control for logical replication was also introduced, allowing subscribers to follow a promoted standby automatically.
MERGE and JSON_TABLE
The MERGE statement from PG 15 became more complete in PG 17 with the addition of a RETURNING clause and the ability to target updatable views. JSON_TABLE() - a SQL standard function for shredding JSON into relational rows - landed as well:
SELECT jt.*
FROM orders,
     JSON_TABLE(orders.line_items, '$[*]'
       COLUMNS (
         product_id int     PATH '$.product_id',
         quantity   int     PATH '$.quantity',
         unit_price numeric PATH '$.price'
       )
     ) AS jt;
This is useful when JSON columns contain arrays that need to be unnested for aggregation or joining. Prior to JSON_TABLE(), the equivalent query required jsonb_array_elements() plus multiple lateral expressions.
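The PG 17 RETURNING addition to MERGE can be sketched like this (table names illustrative); merge_action() reports which clause fired for each affected row:

```sql
MERGE INTO inventory AS t
USING incoming_stock AS s
  ON t.sku = s.sku
WHEN MATCHED THEN
  UPDATE SET qty = t.qty + s.qty
WHEN NOT MATCHED THEN
  INSERT (sku, qty) VALUES (s.sku, s.qty)
RETURNING merge_action(), t.sku, t.qty;
```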
WAL write throughput under high concurrency improved by up to 2x, and COPY export of large rows got roughly a 2x speedup as well. The sslnegotiation=direct libpq connection option avoids the round-trip upgrade negotiation and performs a direct TLS handshake, which matters for connection-heavy workloads like connection poolers doing frequent reconnects.
PostgreSQL 18: Async I/O and UUIDv7
PostgreSQL 18 was released September 2025. The headline change is a native asynchronous I/O (AIO) subsystem, controlled by the io_method server parameter. The available options are worker (async I/O via background workers, the default), sync (synchronous I/O, matching pre-PG 18 behavior), and io_uring on supported Linux kernels. Benchmarks have shown sequential scan throughput improvements of up to 3x in I/O-bound scenarios with io_uring. The new pg_aios system view exposes the asynchronous I/O handles currently in use.
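Switching the I/O method is a server-level setting that only takes effect after a restart - a minimal sketch:

```sql
-- io_uring is only available where PostgreSQL was built with
-- liburing support on a sufficiently recent Linux kernel.
ALTER SYSTEM SET io_method = 'io_uring';

-- After restarting the server, confirm the active setting:
SHOW io_method;
```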
The optimizer gained skip scan support for multicolumn B-tree indexes. This allows the planner to use an index like (tenant_id, created_at) for queries that filter only on created_at, by iterating over distinct tenant_id values rather than falling back to a sequential scan. It is not a replacement for properly designed indexes, but it closes a gap that required redundant single-column indexes in multi-tenant schemas. Self-join elimination was also added - the planner can now detect and remove unnecessary joins of a table to itself, which sometimes appear in ORM-generated queries.
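As a sketch (schema illustrative), a query that omits the leading index column can still use the composite index in PG 18:

```sql
CREATE INDEX idx_orders_tenant_created
  ON orders (tenant_id, created_at);

-- PG 18 can satisfy this via a skip scan over the distinct
-- tenant_id values instead of a sequential scan, even though
-- the leading index column is unconstrained.
EXPLAIN
SELECT *
FROM orders
WHERE created_at >= now() - interval '1 day';
```

Whether the planner actually chooses the skip scan depends on how many distinct tenant_id values exist; with high cardinality in the leading column, a sequential scan may still win.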
Developer-Facing Changes
uuidv7() is now a built-in function, generating timestamp-ordered UUIDs (as defined in RFC 9562). Compared to gen_random_uuid() (UUID v4), v7 UUIDs sort chronologically, which substantially reduces B-tree index fragmentation for insert-heavy workloads using UUIDs as primary keys. This matters for tables with tens of millions of rows where random UUID inserts cause write amplification from page splits.
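A typical use, with an illustrative table, keeps new rows appending near the right edge of the primary-key index instead of landing on random pages:

```sql
CREATE TABLE events (
  id         uuid PRIMARY KEY DEFAULT uuidv7(),  -- time-ordered keys
  payload    jsonb,
  created_at timestamptz NOT NULL DEFAULT now()
);
```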
Virtual generated columns landed as the new default for generated columns. Unlike stored generated columns (which write the computed value to disk), virtual columns compute their value at read time. This avoids the storage overhead and write amplification of stored columns for cases where the computation is cheap. The RETURNING clause in INSERT, UPDATE, DELETE, and MERGE now supports OLD and NEW qualifiers:
UPDATE accounts
SET balance = balance - 100
WHERE id = 42
RETURNING OLD.balance AS before, NEW.balance AS after;
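A virtual generated column can be sketched as follows (illustrative schema); since VIRTUAL is now the default, the keyword is optional:

```sql
CREATE TABLE line_items (
  quantity   int     NOT NULL,
  unit_price numeric NOT NULL,
  -- computed at read time; nothing is written to disk for it
  total      numeric GENERATED ALWAYS AS (quantity * unit_price) VIRTUAL
);
```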
Data checksums are now enabled by default for new clusters created with initdb. Previously this was opt-in, meaning many production clusters ran without checksum protection. Clusters initialized without checksums cannot simply enable them later without downtime (pg_checksums requires the server to be stopped). Teams upgrading from older versions are not automatically affected, but new deployments will get corruption detection out of the box.
OAuth authentication support was introduced, allowing PostgreSQL to authenticate clients via an OAuth 2.0 token. This requires --with-libcurl at compile time and an oauth_validator_libraries configuration. It is most relevant for environments that already use OAuth-based SSO and want to reduce the number of separate credential stores. Parallel GIN index builds also arrived in PG 18, directly relevant for tables with jsonb columns or full-text search vectors where GIN index builds previously had to run single-threaded and could take hours on large datasets.
Upgrade Planning Considerations
Running PG 14 or 15? PG 14 reaches end-of-life in November 2026. PG 13 is already past its support window. The gap between PG 14 and PG 17 or 18 is large enough that the upgrade is worth planning deliberately rather than treating it as a routine patch cycle.
pg_upgrade handles the binary upgrade path and is generally reliable, but it requires schema compatibility and will not handle extension version mismatches automatically. Test the upgrade against a restored production backup in a staging environment before touching production, and validate that any custom extensions (especially ones with C code or custom operators) are compatible with the target version. Logical replication topologies need extra attention - PG 17's slot preservation during upgrades removes one manual step, but subscriber configurations still need verification post-upgrade.