PostgreSQL Performance


Published on October 01, 2025 by Stefanie Janine Stölting

PostgreSQL postgres performance settings configuration pgsql_tweaks



PostgreSQL Performance

A lot of settings, and also the database design, have an impact on the performance of a PostgreSQL database.

Indexes

Foreign Keys

PostgreSQL does not create an index when a foreign key is created. This might seem like a downside, but it is not: it gives database designers the opportunity to create that index with a different index type than the one used on the referenced table.

For example, on a big table a block range index (BRIN) can be faster and is more likely to stay in memory than a B-Tree index.

My PostgreSQL extension pgsql_tweaks has a view for that: see the documentation of pg_foreign_keys.
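
A minimal sketch of what that could look like, assuming a hypothetical orders table whose customer_id references customers; the foreign key itself does not create an index, so one is added explicitly as BRIN:

-- BRIN index on the referencing column of a big table
CREATE INDEX idx_orders_customer_id
ON orders
USING brin (customer_id)
;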

Forgotten Indexes

Check execution plans with EXPLAIN ANALYSE to find them while developing. To find them in production, use the missing indexes view from pgsql_tweaks.
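
A quick sketch of checking a plan while developing; the table and condition are just hypothetical examples:

-- A sequential scan on a big table in the output hints at a missing index
EXPLAIN ANALYSE
SELECT *
FROM orders
WHERE customer_id = 42
;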

There needs to be some data in the database, otherwise the PostgreSQL planner works with wrong assumptions.

Also install the auto_explain module to get execution plans of slow queries logged at runtime.
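
A sketch of a postgresql.conf snippet to enable it; the threshold is an assumption and has to fit the workload:

# Log the execution plan of every query running longer than 250 ms
session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 250ms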

Too Many Indexes

Having too many indexes slows down INSERT and UPDATE statements, because every index has to be written to disk.

Use multicolumn indexes, indexes on expressions, and/or partial indexes (yes, indexes can have a WHERE condition).

Partial indexes are great, for example, when a table contains records that are marked as deleted in a column. A query that needs the deleted records takes longer, because those rows are not covered by the index, but the active records can be found much faster: the index is much smaller, is more likely to stay in memory, and there are fewer pages to traverse in the tree.
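
A sketch of such a partial index, assuming a hypothetical boolean deleted column:

-- Index only the active records
CREATE INDEX idx_orders_active_customer
ON orders (customer_id)
WHERE NOT deleted
;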

Unused Indexes

Regularly check for unused indexes, especially on the production database.

pgsql_tweaks has a view for that: pg_unused_indexes.
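
Besides that view, the statistics collector gives a rough first check; a minimal sketch:

-- Indexes that have not been used since the statistics were last reset
SELECT schemaname, relname, indexrelname
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname
;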

Users and Connections

Handling many users connecting at the same time is not something PostgreSQL is good at.

That is due to how connections are handled: for every connection, the postgres process is forked into a new one.
At some point the server may not be able to fork another process, and the database connection fails.

Also, with a lot of connections, each connection only has a small amount of RAM available. That results in data being spilled to disk instead of being processed in memory. Even though spinning rust is mostly gone, this slows down responses.

Use a server-side connection pooler like PgBouncer to increase the number of client connections. It keeps connections open and spares PostgreSQL from forking a process for every new connection.
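
A minimal pgbouncer.ini sketch; the database name, pool mode, and pool sizes are assumptions and have to be calculated for the actual workload:

; pgbouncer.ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20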

Another problem might even end in an immediate server shutdown: when the configuration is not well calculated, PostgreSQL might run out of memory.

Splitting Data over Disks

On write-heavy PostgreSQL servers it is often useful to have dedicated disks for certain purposes. This splits up the work of writing data:

  • One Disk for the OS
  • One Disk for the data
  • One Disk for the write ahead log (WAL)
  • One Disk for database logs

Just having several partitions on the same disk does not speed anything up.
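
The data disk and the WAL disk are usually set up at initdb time (data directory and --waldir); further disks can be used via tablespaces. A sketch, assuming the mount point exists and is owned by the postgres OS user:

-- Create a tablespace on a dedicated disk and move a table onto it
CREATE TABLESPACE fast_data LOCATION '/mnt/disk2/pgdata';

ALTER TABLE orders SET TABLESPACE fast_data;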

File Systems

Splitting data over disks and using ZFS snapshots is impossible, as the snapshots of the different disks will be taken at slightly different times.

This results in an inconsistent state for an RDBMS, and PostgreSQL will fail to start, as it has several checks that make sure the system is not in an undefined state.

Network

An insufficient network can be very problematic. And that is not only true for client connections, but also for the replication of data.

This could result in followers falling behind the primary, which leads to stale query results on a follower compared to the same query on the primary.

But there is also a problem when a follower is not able to fetch the WAL from the primary:
the WAL kept for the follower can fill up the disk, and that ends with PostgreSQL shutting down.

Monitor the WAL status of the primary and the followers, for example with the view pg_stat_replication.
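
A minimal sketch of checking the WAL lag on the primary:

-- Run on the primary: how far each follower lags behind, in WAL bytes
SELECT application_name,
       state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication
;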

Vacuum

PostgreSQL vacuum is necessary to remove data from tables that is not visible anymore. This is due to how Multi Version Concurrency Control (MVCC) is implemented.

Autovacuum takes care of removing the invisible data by making the space available for reuse. This does not result in more free disk space, except when pages at the end of a table are vacuumed; those pages are removed.
Statistically speaking, that can be ignored.

Therefore never turn autovacuum off.

Autovacuum Configuration

By default autovacuum is configured very conservatively. The settings can be adjusted to be more aggressive, meaning cleaning up tables at a lower bloat threshold and using more workers, especially when there are lots of tables.

Parameters to tune:

autovacuum_max_workers has to be adjusted to the other jobs running on the server: too many of these workers might slow down the server.
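
A postgresql.conf sketch of commonly tuned autovacuum parameters; the values are examples and depend on the workload and the hardware:

# Changing autovacuum_max_workers requires a server restart
autovacuum_max_workers = 6
autovacuum_naptime = 30s
autovacuum_vacuum_scale_factor = 0.05
autovacuum_vacuum_cost_limit = 1000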

Table Level Configuration

On tables against which lots of UPDATE and/or DELETE statements are executed, autovacuum can be adjusted to run more often than what is set in the global configuration.

This results in shorter runtimes of autovacuum on these tables, therefore reducing the impact.

Monitoring Tables

To know the bloat of tables, the ones with high insert, update, and/or delete rates should be monitored.

Don’t monitor all tables; that will move the focus away from what is necessary to monitor.

There is a view to see the current status of tables, pg_table_bloat. This view is very expensive, therefore do not query it too often. It is often useful to filter the result to only those tables that need monitoring.

The downside with TimescaleDB here is that all time partitions are independent tables for PostgreSQL, therefore you cannot just monitor the main TimescaleDB table. In fact, exclude the partitions, as they do not return any useful information.
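
Besides pg_table_bloat, the dead tuple counters are a cheap first indicator; a sketch that also excludes TimescaleDB's internal partitions, which live in the _timescaledb_internal schema:

-- Tables with the most dead tuples, excluding TimescaleDB chunk tables
SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname <> '_timescaledb_internal'
ORDER BY n_dead_tup DESC
LIMIT 20
;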

Adjusting For Index Only Scans

Index-only scans are the fastest way to access data in joins, WHERE conditions, and sorting. They rely on the visibility map, which is kept up to date by vacuum, so autovacuum should run more often on the affected tables.

-- Tuning autovacuum for index only scans
ALTER TABLE {table name}
SET (autovacuum_vacuum_scale_factor = 0.01)
;

Autovacuum For Tables With High Insert Rates

With around 100,000 transactions per day doing UPDATE or DELETE, tune it to run roughly once a day:
adjust autovacuum_freeze_max_age to the number of daily transactions on these tables.

ALTER TABLE {table name}
SET (autovacuum_freeze_max_age = 100000)
;

Transaction Wraparound

Transaction wraparound is a serious problem that might even require maintenance downtime of the server if not addressed.

Anti-wraparound vacuum has to be able to freeze tuples in all tables.
Blocking the anti-wraparound vacuum can be caused by:

  • Very long running database sessions
  • Data corruption

For tables with lots of INSERT statements, the problem is solved as described in Autovacuum For Tables With High Insert Rates.
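
A sketch for keeping an eye on how close each database is to wraparound; autovacuum_freeze_max_age defaults to 200 million:

-- Age of the oldest unfrozen transaction ID per database
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC
;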

Fillfactor

The fillfactor is available on tables and indexes.

PostgreSQL stores its data in pages. The fillfactor defines, in percent, how much of a page is filled when rows are written.

On Tables

A lower fillfactor increases the performance especially of UPDATE statements, as the free space in a page allows updated row versions to be stored in the same page.

The rule of thumb is to have a fillfactor on tables between 70% and 90%, depending on the number of updates and the size of the rows stored in a page.

On tables that only get rows inserted, a fillfactor of 100% is the best setting.
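
A sketch of setting a fillfactor on an update-heavy table, using the placeholder style from above; the value is an assumption within the 70% to 90% range:

-- Leave 20 % of every page free for updated row versions;
-- only newly written pages are affected, a table rewrite applies it to existing ones
ALTER TABLE {table name}
SET (fillfactor = 80)
;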

On Indexes

The fillfactor affects indexes differently than tables. B-Tree indexes store new entries in the same leaf page when possible; otherwise the page is split and the index entries are spread over several pages.

Therefore the fillfactor is useful for INSERT and UPDATE statements on indexed tables.
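
A sketch of an index created with a lower fillfactor; the names follow the earlier hypothetical example:

-- Leave room in the leaf pages so new entries cause fewer page splits
CREATE INDEX idx_orders_created_at
ON orders (created_at)
WITH (fillfactor = 80)
;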

Prewarm Cache

The more data you have, the more memory your servers will usually have. Accessing data in RAM is still orders of magnitude faster than reading it from SSDs, not to speak of rotating disks.

Every time you restart a PostgreSQL server, the caches are empty and will be filled again step by step. That is not a great experience for users, as their requests run slowly while the data has to be read from disk.

There is an extension to solve this problem: pg_prewarm.

When the server is shut down, it takes a bit longer, as the information about which blocks are cached is written to disk. The start is also slower, as the caches are refilled based on that file. But the user experience is the same as before the restart.
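
A sketch of both ways to use it, assuming a hypothetical orders table. Manual prewarming of a single table:

-- Load one table into the buffer cache
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');

Automatic prewarming across restarts needs the library preloaded and therefore a server restart:

# postgresql.conf
shared_preload_libraries = 'pg_prewarm'
pg_prewarm.autoprewarm = on
pg_prewarm.autoprewarm_interval = 300s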



Author: Stefanie Janine Stölting