PostgreSQL 17/18: What's New and Performance Improvements
PostgreSQL remains the most advanced open-source relational database in the world, and versions 17 and 18 consolidate
that position with substantial improvements that directly impact production performance.
PostgreSQL 17, released September 26, 2024, brings a reworked VACUUM, SQL/JSON with JSON_TABLE(),
and a smarter planner. PostgreSQL 18, released September 25, 2025, adds native asynchronous I/O,
virtual generated columns, and OAuth 2.0: a generational leap for modern applications.
What You Will Learn
- Key features in PostgreSQL 17 that improve queries and maintenance
- The impact of PostgreSQL 18 asynchronous I/O on intensive workloads
- How to use JSON_TABLE() to simplify queries on semi-structured data
- Real benchmarks: how much does performance improve after the upgrade?
- Practical guide to upgrading from PostgreSQL 16 to 17/18
PostgreSQL 17: The Most Impactful News
Version 17 focuses on three areas: memory and storage management, SQL/JSON, and planner performance. Let's look at the changes with the greatest production impact.
A Faster, Less Intrusive VACUUM
The traditional VACUUM stored dead-tuple IDs in a flat array capped at 1GB, which on large tables often forced multiple full passes over every index. PostgreSQL 17 replaces that array with a radix-tree-based structure (TidStore): memory usage drops by up to 20x, the 1GB cap is gone, and multi-pass index vacuuming becomes rare. The result is shorter vacuums with less lock contention and lower I/O peaks, exactly where the old behavior caused degradation visible to users.
-- Tune autovacuum per table so the improved VACUUM runs earlier on hot tables
ALTER TABLE ordini SET (
    autovacuum_vacuum_scale_factor = 0.05,  -- vacuum after ~5% dead tuples
    autovacuum_vacuum_cost_delay = 2        -- ms pause between cost-limited batches
);
-- Monitor vacuum activity
SELECT
schemaname,
tablename,
last_vacuum,
last_autovacuum,
n_dead_tup,
n_live_tup,
n_mod_since_analyze
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
SQL/JSON with JSON_TABLE()
JSON_TABLE() is the most anticipated SQL feature: it transforms JSON into relational rows without
jsonb_to_recordset() or complex lateral expressions, and it complies with the SQL/JSON standard
ISO/IEC 9075-2:2023.
-- Before (PostgreSQL 16): lateral expansion with jsonb_array_elements
SELECT
evento->>'tipo' AS tipo,
evento->>'utente_id' AS utente_id,
(evento->>'timestamp')::timestamptz AS ts
FROM log_eventi,
jsonb_array_elements(payload->'eventi') AS evento;
-- After (PostgreSQL 17): ISO-standard JSON_TABLE()
SELECT jt.*
FROM log_eventi,
JSON_TABLE(
payload,
'$.eventi[*]' COLUMNS (
tipo VARCHAR(50) PATH '$.tipo',
utente_id BIGINT PATH '$.utente_id',
ts TIMESTAMPTZ PATH '$.timestamp'
)
) AS jt;
JSON_TABLE() is not just cleaner syntax: the planner understands it natively and can generate
more efficient plans than the pre-17 approaches based on lateral joins and set-returning functions.
Query Planner improvements
PostgreSQL 17 improves selectivity estimation for range scans, lowers the estimated cost of
DISTINCT on indexed columns, and makes better use of B-tree indexes
in the presence of IS NULL and IS NOT NULL conditions.
-- Verify planner improvements with EXPLAIN
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT DISTINCT categoria_id
FROM prodotti
WHERE prezzo IS NOT NULL
AND disponibile = true;
-- PostgreSQL 17 makes better use of composite indexes in these scenarios
CREATE INDEX CONCURRENTLY idx_prodotti_cat_prezzo
ON prodotti (categoria_id, prezzo)
WHERE disponibile = true;
PostgreSQL 18: The Leap to Asynchronous I/O
PostgreSQL 18 introduces native asynchronous I/O, one of the most profound architectural changes
in years. Historically, PostgreSQL issued synchronous pread() calls: every read
blocked the backend process until it completed. With asynchronous I/O, multiple reads
can be in flight at once, making far better use of modern NVMe hardware.
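The effect is easy to see outside the database. A minimal Python sketch, purely a conceptual analogy (the 10 ms sleep stands in for one disk read; none of this is PostgreSQL code):

```python
import asyncio
import time

READ_LATENCY = 0.01  # seconds; stand-in for one disk read


async def fake_read(block: int) -> int:
    """Simulate one I/O request: latency dominates, not CPU work."""
    await asyncio.sleep(READ_LATENCY)
    return block


async def sequential(n: int) -> float:
    """Old pread() model: each read waits for the previous one."""
    start = time.perf_counter()
    for i in range(n):
        await fake_read(i)
    return time.perf_counter() - start


async def concurrent(n: int) -> float:
    """Async I/O model: all n reads are in flight at once."""
    start = time.perf_counter()
    await asyncio.gather(*(fake_read(i) for i in range(n)))
    return time.perf_counter() - start


async def main() -> None:
    n = 16
    seq = await sequential(n)
    conc = await concurrent(n)
    print(f"sequential: {seq:.3f}s, concurrent: {conc:.3f}s")


asyncio.run(main())
```

With 16 simulated reads, the sequential loop takes roughly 16x the single-read latency, while the concurrent version finishes in about one latency period: the same effect PG18 exploits on deep NVMe queues.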
# postgresql.conf: new asynchronous I/O parameters in PG18
# I/O backend: 'worker' (the default), 'io_uring' (Linux 5.1+, requires a build with liburing), or 'sync'
io_method = io_uring
# Maximum number of asynchronous I/O requests in flight
io_max_concurrency = 32 # raise for high-IOPS NVMe drives
# Number of I/O worker processes (used when io_method = worker)
io_workers = 4
io_uring compatibility
io_uring requires Linux kernel 5.1+ and a PostgreSQL build compiled with liburing. On older kernels
or on macOS/Windows, use io_method = worker (the default), which still improves
throughput compared to the traditional synchronous path.
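Whether the kernel is new enough can be checked with a one-liner before flipping the setting. A quick sketch (it checks only the kernel version, not whether your PostgreSQL build was compiled with liburing):

```python
import platform


def kernel_supports_io_uring(release: str) -> bool:
    """io_uring landed in Linux 5.1; compare the major.minor prefix."""
    major, minor = (int(x) for x in release.split(".")[:2])
    return (major, minor) >= (5, 1)


# platform.release() returns e.g. "6.8.0-45-generic" on Ubuntu
if platform.system() == "Linux":
    ok = kernel_supports_io_uring(platform.release())
    print("io_uring-capable kernel" if ok else "use io_method = worker")
```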
Virtual Generated Columns
PostgreSQL has supported stored generated columns (value computed and written to disk) since version 12. PostgreSQL 18 adds virtual generated columns: the value is computed at read time and takes up no disk space. Ideal for lightweight derived values.
-- PostgreSQL 18: virtual generated column (not stored on disk)
CREATE TABLE prodotti (
id BIGSERIAL PRIMARY KEY,
nome TEXT NOT NULL,
prezzo_netto NUMERIC(10,2) NOT NULL,
iva_pct NUMERIC(5,2) NOT NULL DEFAULT 22.0,
-- virtual: computed on the fly, zero disk space
prezzo_lordo NUMERIC(10,2) GENERATED ALWAYS AS
(ROUND(prezzo_netto * (1 + iva_pct / 100), 2)) VIRTUAL
);
-- The column behaves like any other in queries
SELECT nome, prezzo_netto, prezzo_lordo
FROM prodotti
WHERE prezzo_lordo BETWEEN 100 AND 500;
OAuth 2.0 Authentication
PostgreSQL 18 supports OAuth 2.0 natively as an authentication method, removing the need
for external proxies or wrappers to integrate with corporate identity providers (Azure AD, Okta, Google
Workspace). Authentication is configured in pg_hba.conf; the server also needs a token
validator library, set via oauth_validator_libraries.
# pg_hba.conf: OAuth 2.0 authentication (PostgreSQL 18)
# TYPE    DATABASE  USER      ADDRESS  METHOD  OPTIONS
hostssl   all       app_user  all      oauth   issuer="https://login.microsoftonline.com/tenant-id/v2.0" scope="postgres"
Real Benchmarks: How Much Does Performance Improve?
Community benchmarks and testing point to significant improvements in realistic scenarios. The results below come from typical production hardware: 8-core CPU, 32GB RAM, NVMe SSD.
| Scenario | PostgreSQL 16 | PostgreSQL 17 | PostgreSQL 18 |
|---|---|---|---|
| VACUUM on 50GB table with 30% dead tuples | 480s | 310s (-35%) | 280s (-42%) |
| Sequential scan, 100M rows (NVMe) | 28s | 25s (-11%) | 17s (-39%) |
| JSON_TABLE() on 1M JSON documents | N/A | 4.2s | 3.1s |
| pgbench TPS (scale=1000, client=50) | 18,400 | 19,100 (+3.8%) | 21,800 (+18.5%) |
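The percentage columns follow directly from the raw timings; a quick sketch that reproduces the arithmetic (the numbers are taken from the table above):

```python
def pct_change(old: float, new: float) -> float:
    """Relative change in percent; negative = faster, positive = higher TPS."""
    return round((new - old) / old * 100, 1)


# VACUUM on the 50GB table: 480s -> 310s (PG17) -> 280s (PG18)
print(pct_change(480, 310))        # about -35%
print(pct_change(480, 280))        # about -42%
# pgbench TPS: 18,400 -> 21,800
print(pct_change(18_400, 21_800))  # about +18.5%
```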
Upgrade Guide: From PostgreSQL 16 to 17/18
The upgrade to PostgreSQL 17/18 can be done via pg_upgrade (in-place, fastest)
or via logical replication (near-zero downtime). Let's look at both approaches.
Upgrade with pg_upgrade (Quick Method)
#!/bin/bash
# Upgrade PostgreSQL 16 -> 17 with pg_upgrade
# 1. Install PostgreSQL 17 (Ubuntu/Debian)
apt install postgresql-17
# 2. Stop the 16 cluster
systemctl stop postgresql@16-main
# 3. Dry run: --check only verifies, nothing is modified
#    --link hard-links the data files: much faster, no copying
/usr/lib/postgresql/17/bin/pg_upgrade \
  --old-datadir /var/lib/postgresql/16/main \
  --new-datadir /var/lib/postgresql/17/main \
  --old-bindir /usr/lib/postgresql/16/bin \
  --new-bindir /usr/lib/postgresql/17/bin \
  --link \
  --check
# 4. If the check passes, run the real upgrade (drop --check)
/usr/lib/postgresql/17/bin/pg_upgrade \
  --old-datadir /var/lib/postgresql/16/main \
  --new-datadir /var/lib/postgresql/17/main \
  --old-bindir /usr/lib/postgresql/16/bin \
  --new-bindir /usr/lib/postgresql/17/bin \
  --link
# 5. Start PostgreSQL 17
systemctl start postgresql@17-main
# 6. Rebuild optimizer statistics
/usr/lib/postgresql/17/bin/vacuumdb --all --analyze-in-stages
Upgrade with Zero Downtime via Logical Replication
-- STEP 1: On the PostgreSQL 16 server (source)
-- postgresql.conf must already have:
--   wal_level = logical
--   max_replication_slots = 10
-- Create a publication covering all tables
CREATE PUBLICATION pg17_migration FOR ALL TABLES;
-- STEP 2: On the PostgreSQL 17 server (target)
-- Recreate the identical schema first (migrate the DDL)
-- ...
-- Create the subscription
CREATE SUBSCRIPTION pg17_sub
CONNECTION 'host=pg16-host port=5432 dbname=mydb user=replicator'
PUBLICATION pg17_migration
WITH (create_slot = true, slot_name = 'pg17_migration_slot');
-- STEP 3: Monitor the replication lag
SELECT
slot_name,
confirmed_flush_lsn,
pg_current_wal_lsn() - confirmed_flush_lsn AS lag_bytes
FROM pg_replication_slots;
-- STEP 4: When the lag is close to zero, switch traffic over
-- (update the load balancer or DNS)
-- STEP 5: Cleanup after the cutover
DROP SUBSCRIPTION pg17_sub; -- on PG17
-- On PG16:
SELECT pg_drop_replication_slot('pg17_migration_slot');
Best Practices for Upgrades
- Always run pg_upgrade --check before the actual upgrade to detect incompatibilities
- Test the upgrade on a clone of the production database before the cutover
- With --link the upgrade is nearly instantaneous, but there is no rolling back to the old cluster once PG17 has started
- Update extensions after upgrading: ALTER EXTENSION name UPDATE
- Monitor slow queries for a week after the upgrade: the planner may choose different plans
New Features for Developers: SQL Improvements
Beyond performance, PostgreSQL 17 and 18 bring several features that simplify day-to-day SQL development.
MERGE with RETURNING (PostgreSQL 17)
-- PostgreSQL 17: MERGE with RETURNING for upsert auditing
MERGE INTO inventario AS target
USING (VALUES
(1001, 50, 'warehouse-A'),
(1002, 30, 'warehouse-B')
) AS source(prodotto_id, quantita, location)
ON target.prodotto_id = source.prodotto_id
WHEN MATCHED THEN
UPDATE SET
quantita = target.quantita + source.quantita,
ultimo_aggiornamento = NOW()
WHEN NOT MATCHED THEN
INSERT (prodotto_id, quantita, location, creato_il)
VALUES (source.prodotto_id, source.quantita, source.location, NOW())
RETURNING
    merge_action() AS action, -- 'INSERT' or 'UPDATE' (new in PG17)
    prodotto_id,
    quantita;
COPY FROM with ON_ERROR (PostgreSQL 17)
-- PostgreSQL 17: skip invalid rows during COPY
-- Before: the whole operation failed at the first corrupt row
COPY prodotti (id, nome, prezzo)
FROM '/data/import.csv'
WITH (
FORMAT csv,
HEADER true,
ON_ERROR ignore, -- skip rows with errors
LOG_VERBOSITY verbose -- log each skipped row
);
-- COPY reports skipped rows in a NOTICE; during a long-running load
-- the count is also visible in the progress view (PG17+)
SELECT tuples_skipped FROM pg_stat_progress_copy;
Optimal configuration for PostgreSQL 17/18
Every upgrade is an opportunity to review the configuration. These parameters are particularly relevant to the new features in PG17/18.
# postgresql.conf tuned for PostgreSQL 18
# (hardware: 8 cores, 32GB RAM, NVMe SSD)
# Memory
shared_buffers = 8GB # 25% of RAM
effective_cache_size = 24GB # estimate of total memory available for caching
work_mem = 64MB # per sort/hash node; beware of many parallel connections
maintenance_work_mem = 2GB # for VACUUM, CREATE INDEX
# Asynchronous I/O (PostgreSQL 18)
io_method = io_uring # Linux 5.1+ with liburing
io_max_concurrency = 64 # for high-IOPS NVMe
effective_io_concurrency = 200 # for bitmap heap scans
# Parallelism
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_parallel_maintenance_workers = 4
# Autovacuum
autovacuum_vacuum_cost_delay = 2ms # the default since PG12; lower to be more aggressive
autovacuum_vacuum_cost_limit = 800 # raised from the default of 200
# WAL
wal_level = logical # required for logical replication
max_wal_size = 4GB
checkpoint_completion_target = 0.9
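The memory block above follows the usual rule of thumb: shared_buffers at ~25% of RAM, effective_cache_size at ~75%, maintenance_work_mem capped at 2GB. A small hypothetical helper to rescale those three values for a different machine; the function name and heuristics are illustrative, not an official formula:

```python
def memory_settings(ram_gb: int) -> dict[str, str]:
    """Rule-of-thumb postgresql.conf memory sizing from total RAM in GB."""
    return {
        "shared_buffers": f"{ram_gb // 4}GB",            # ~25% of RAM
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",  # ~75% of RAM
        "maintenance_work_mem": f"{min(2, ram_gb // 16)}GB",  # capped at 2GB
    }


# The 32GB machine used throughout this article:
print(memory_settings(32))
```

Running it for 32GB reproduces the values in the block above (8GB, 24GB, 2GB); always sanity-check the result against your actual connection count and workload.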
Post-Upgrade Monitoring
After the upgrade, actively monitor the following indicators for at least one week.
-- Query to spot post-upgrade performance regressions
-- Requires the pg_stat_statements extension
SELECT
LEFT(query, 80) AS query_preview,
calls,
ROUND(mean_exec_time::numeric, 2) AS mean_ms,
ROUND(stddev_exec_time::numeric, 2) AS stddev_ms,
ROUND(total_exec_time::numeric, 0) AS total_ms,
rows / calls AS avg_rows
FROM pg_stat_statements
WHERE calls > 100
AND mean_exec_time > 100 -- slow queries (>100ms)
ORDER BY mean_exec_time DESC
LIMIT 20;
-- Check whether the planner picks different plans than PG16
-- (enable auto_explain to capture the plans)
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '500ms';
SET auto_explain.log_analyze = true;
SET auto_explain.log_buffers = true;
Conclusions
PostgreSQL 17 and 18 are upgrades worth prioritizing for anyone who wants to get the most out of their hardware.
The reworked VACUUM tackles a long-standing pain point on large tables, JSON_TABLE()
simplifies queries on semi-structured data, and PG18's asynchronous I/O improves
throughput by up to roughly 40% on I/O-bound NVMe workloads.
The next article in the series dives into how to read and optimize query plans with
EXPLAIN ANALYZE: the most powerful technique for identifying slow queries in production.
Series: Database Engineering and Optimization
- Article 1 (this): PostgreSQL 17/18 - What's new and Performance
- Article 2: EXPLAIN ANALYZE - Read and Optimize Query Plans
- Article 3: Advanced Indexing - Partial, Covering, BRIN, GIN
- Article 4: Table Partitioning - Managing Billions of Rows
- Article 5: Connection Pooling - PgBouncer vs Pgpool-II