- The codebase is old and huge, accruing some heavy technical debt, making it a less than ideal foundation for iterating quickly on a new paradigm like AI and vector databases.
- Some ancient design decisions have aged poorly, such as its one-connection-per-process model, which is not as efficient as distributing async tasks over thread pools. If you don't mitigate it with an external connection pooler, you can easily run into real production issues.
- Certain common use cases suffer from poor performance; write amplification, for example, is a known issue. Many junior developers assume they can cheaply update a timestamp or increment a counter on a wide, heavily indexed main table without consequence (sketched below).
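For illustration, a hedged sketch of the pattern and one common mitigation (table and column names are made up):

    -- The "cheap" touch that actually rewrites a whole row version of a wide,
    -- heavily indexed table on every call:
    UPDATE accounts SET last_seen_at = now() WHERE id = 42;

    -- One mitigation: move the hot column into a narrow side table so the
    -- wide row (and its indexes) are left alone.
    CREATE TABLE account_activity (
        account_id   BIGINT PRIMARY KEY REFERENCES accounts(id),
        last_seen_at TIMESTAMPTZ NOT NULL
    );
    UPDATE account_activity SET last_seen_at = now() WHERE account_id = 42;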
So, yes, PG is one of the best compromises available on the database market today. It's robust, offers good enough performance, and is feature-rich. However, I don't believe it can become the ONE database for all purposes.
Using a dedicated tool best suited for a specific use case still has its place; SQLite and DuckDB, for instance, are very different solutions with interesting trade-offs.
I believe there are contributors currently working on a thread-per-connection version of PostgreSQL. It's a huge amount of work, so I wouldn't expect it to be released tomorrow.
Regarding wide updates, I believe that HOT updates already partially solve this problem.
Postgres handles updates as insert+delete, and its secondary indexes reference the physical location of the row, instead of the primary key. This means that whenever an update results in an insert to a different page, the index needs to be updated as well, even if the indexed column hasn't been modified.
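For what it's worth, a hedged sketch of nudging more updates to be HOT by leaving free space on each heap page, so the new row version can stay on the same page and indexes don't need to be touched (the table name and the 70% value are just illustrative):

    -- Leave ~30% free space on each heap page for same-page (HOT) updates.
    ALTER TABLE events SET (fillfactor = 70);

    -- Check what fraction of updates are actually HOT:
    SELECT n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'events';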
Replication has a similar amplification issue. Historically, Postgres has favored physical replication over per-row logical replication, which means replication needs to transfer every modified page, including modified index pages, instead of just the new value of the modified row. (I think logical replication support has improved over the last couple of years.)
There is the OrioleDB project, which attempts to improve on the design flaws in postgres's storage engine, but it's definitely not production ready yet.
It’s worse than that, unfortunately – since the entire row has to be updated due to MVCC, if a single index is updated, then _all_ indexes are updated. If you have a wide table with a lot of indexes, each UPDATE is N*index writes. This becomes even worse with multiple updates if indexed columns aren’t k-sortable (like UUIDv4), since now the DB will probably have to jump around multiple pages despite the rows being logically sequential.
> not to mention its ElasticSearch grade full-text search capabilities.
I played with PostgreSQL a while ago to implement search. It's not horrible, but it's nowhere near Elasticsearch in terms of capabilities. It's adequate for very narrow use cases where search ranking doesn't really matter much (i.e. your revenue is not really impacted by poor precision and recall). If your revenue does depend on that (e.g. because people buy stuff that they find on your website), you should be more careful about monitoring your search performance and using the right tools to improve it.
But for everything else you only have a handful of tools to work with to tune things, and what little there is is hard to use and kind of clunky. Great if that really is all you need and you know what you are doing, but if you've used Elasticsearch and know how to use it properly, you'll find yourself missing quite a few things. Maybe some of those things will get added over time, but for now it simply does not give you a lot to work with.
That being said, if you go down that path, the trigram support in Postgres is actually quite useful for implementing simple search. I went for that after trying the very clunky tsvector support and finding it underwhelming for even the simplest of use cases. Trigrams are easier to deal with in Postgres, and you can implement some half-decent ranking with them. Great for searching across product IDs, names, and other short strings.
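For illustration, a minimal pg_trgm setup along those lines (table and column names are made up):

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE INDEX products_name_trgm_idx
        ON products USING gin (name gin_trgm_ops);

    -- "%" is the trigram similarity operator; similarity() gives a crude score,
    -- which is tolerant of typos in the query string.
    SELECT id, name, similarity(name, 'wireles keybord') AS score
    FROM products
    WHERE name % 'wireles keybord'
    ORDER BY score DESC
    LIMIT 10;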
Also, with pg_trgm [0] (mentioned by OP) and pgvector for semantic search, you have a pretty powerful search toolkit; for example, you can combine them for hybrid search [1].
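As a rough sketch of what that combination can look like (table, columns, weights, and the query embedding are all made up, and a real embedding would have far more dimensions):

    -- Blend lexical (trigram) similarity with semantic (pgvector) similarity.
    -- "<=>" is pgvector's cosine distance, so 1 - distance is a similarity.
    SELECT id, title,
           0.5 * similarity(title, 'garden hose')
         + 0.5 * (1 - (embedding <=> '[0.1, 0.2, 0.3]'::vector)) AS score
    FROM docs
    ORDER BY score DESC
    LIMIT 10;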
Still very limited, and frankly these are all fairly low-level primitives. Unless you are a search expert, you won't be able to do much productive with this stuff. If you are, it might fit a few use cases. But then, why limit yourself to just this?
The point being, of course, that the target audience for this stuff is people who, for whatever reason, are a bit shy about using the right tool for the job and are probably lacking a lot of search expertise. The intersection of people who have that expertise and would still be happy with this narrow subset of functionality is just not very large.
We're doing this because our main product (Dolt) is MySQL-compatible, but a lot of people prefer postgres. Like, they really strongly prefer postgres. When figuring out how to support them, we basically had three options:
1) Foreign data wrapper. This doesn't work well because you can't use non-native stored procedure calls, which are used heavily throughout our product (e.g. CALL DOLT_COMMIT('-m', 'changes'), CALL DOLT_BRANCH('newBranch')). We would have had to invent a new UX surface area for the product just to support Postgres.
2) Fork Postgres, write our own storage layer and parser extensions, etc. Definitely doable, but it would mean porting our existing Go codebase to C and not being able to share code with Dolt as development continues. Or else rewriting Dolt in C, throwing out the last 5 years of work. Or doing something very complicated and difficult to be able to use a Go library from C code.
3) Emulation. Keep Dolt's Go codebase and query engine and build a Postgres layer on top of it to support the syntax, wire protocol, types, functions, etc.
Ultimately we went with the emulation approach as the least bad option, but it's an uphill climb to get to enough postgres support to be worth using. Our main effort right now is getting all of postgres's types working.
I can totally understand this. Your feature set isn’t compelling enough for me to switch to MySQL when, as the original article states, Postgres basically does everything under the sun, well enough that I can keep my stack simple.
But if it allows me to use my existing Postgres tools, drivers, code, and knowledge then I’d consider it.
Postgres is simply the best. One thing I would like however is the ability to have control over the query planner for specific tasks. There is a dark art to influencing the query planner, but essentially it is unpredictable, and postgres can get it consistently wrong in certain scenarios. If you could just enable a special query mode that gives you absolute control over the QP for that query, it would solve a major pain point.
I'm not a database developer, and last time I researched this (a few years ago) I found many good reasons for not enabling this from postgres contributors. But it would still be very useful.
The problem is not the query planner per se. There is a much more subtle problem, and it is related to how you have structured the joins in your query.
For many queries, the order in which you specify the joins doesn't really matter. But there are a number of classes where the join order dramatically affects how fast the query can actually run and nothing the query planner does will change this.
I came across this problem around 30 years ago. By accident, I discovered what the cause was: the order of the joins. The original query as built took 30-40 minutes to run. I deleted particular joins to see what the intermediate results were. On re-establishing the joins, the query time went down to a couple of seconds.
I was able to establish that the order of joins in this particular case was generating a Cartesian product of the original base records. By judicious reordering of the joins, this Cartesian product was avoided.
If you are aware of this kind of problem, you can solve it faster than any query planner ever could.
Usually that kind of problem is a result of exceeding {from,join}_collapse_limit, which defaults to 8. If you have more tables than that in a query, Postgres doesn't exhaustively try all orderings to determine the best one, and instead uses its genetic algorithm, which can be worse.
You can raise the limit at the risk of query planning times going up exponentially, or refactor your schema, or, as you did, rewrite the query to be more restrictive out of the gate. That way, those join paths will be found first and will be the best found when the planner gives up.
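For reference, the knobs involved look like this (the values are arbitrary):

    -- Let the planner exhaustively consider join orders for larger queries,
    -- at the cost of potentially much longer planning times.
    SET join_collapse_limit = 12;
    SET from_collapse_limit = 12;

    -- The genetic optimizer kicks in above this many relations:
    SHOW geqo_threshold;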
A combination of two joins, filtering on all tables, and sorting by the LEFT one. Performance was fine until we hit a scale at which it suddenly became unpredictable.
In hindsight, given the variability of the queries and table structure, I don't think any query planner could have done a good job. The natural answer was to denormalize. But the journey to get there was a little unpredictable.
I’m currently learning the basics of this. Currently struggling with multiple similar scenarios where switching from a left to an inner join, or any equivalent, kills performance. But these are aggregation queries so there are only 5 records returned. I could just filter in my app code no problem. But why the hell does adding “where foo.id is not null” in SQL make it O(N*M)??? CTEs are not helping.
Every time stuff like this comes up I wonder how much the people having issues would be willing to share - because every time I've fought with the postgres query planner, it eventually turned out what I wanted to do had massively worse performance* because of something I didn't take into account that postgres did. And each time, once I learned what that thing was, I was able to fix it the right way and get the query plan I was expecting, but also with the performance I was expecting.
* Usually I've been able to force query plans by disabling whole operations for a session, such as disallowing "sort" to make it use an index. The real fix in this case, for example, was to use CLUSTER to re-order data on disk, so the correlation statistic was close to 1 and postgres wanted to use the index instead of table scan + sort.
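Roughly what that looks like, with made-up table and index names (disabling planner operations is for diagnosis, not something to leave on in production):

    -- Diagnose: discourage explicit sorts for this session and compare plans.
    SET enable_sort = off;
    EXPLAIN ANALYZE
    SELECT * FROM events WHERE account_id = 42 ORDER BY created_at;
    RESET enable_sort;

    -- The actual fix described above: physically reorder the table to match
    -- the index, then refresh statistics so the correlation is close to 1.
    CLUSTER events USING events_account_created_idx;
    ANALYZE events;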
It may use the same name, but reading https://dev.mysql.com/doc/refman/8.0/en/innodb-index-types.h... this doesn't really look like the same thing and wouldn't help here. InnoDB's clustered index (usually on the primary key) is used for fast row lookups by that primary key. It only has an advantage if the primary key order is the order you want; otherwise it would have the exact same problem Postgres's query planner was protecting me against, but with no way to fix it.
> In InnoDB, each record in a secondary index contains the primary key columns for the row, as well as the columns specified for the secondary index.
I think that unless you're only doing table scans, or your rows are inserted in no discernible order, you should see a speedup. If the latter though, then yes of course, page jumps are page jumps.
If it does an index scan on the secondary index, unless it's an index-only scan it'll still have to jump around to different pages on the clustered index, so it can't really take advantage of the disk cache unless the whole thing is cached. Or if there's a high correlation between the secondary index's order and clustered index's order, which is what the postgres CLUSTER command does.
If you hit the case I did, this will make your queries perform worse: The reason postgres insisted on table scan + sort was the random access time jumping around the disk and constantly invalidating the cache when it did an index scan. Using CLUSTER on the table made the index order match the table order so the index scan didn't jump around and there was no random access penalty - it instead worked with the disk cache.
If you're on an SSD and not a spinning disk, or have a lot of memory and can expect it to all be cached, there's a separate setting you can change to adjust the random access penalty - see random_page_cost on https://www.postgresql.org/docs/current/runtime-config-query...
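For example, a commonly suggested starting point for SSD-backed storage (treat the value as a guess to tune, not gospel):

    -- Tell the planner that random page reads are nearly as cheap as
    -- sequential ones, then reload the configuration.
    ALTER SYSTEM SET random_page_cost = 1.1;
    SELECT pg_reload_conf();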
No, this is a fundamental concept in Postgres. If you do EXPLAIN ANALYZE on a query, you get the query plan, which is influenced by the query, indexes, table structure, etc. But the QP may decide to do a silly thing like a sequential scan where a better path exists, and adding an index just to avoid the scan would be cost-prohibitive. So if you could just override the QP and say "use this index, do this type of sort, and then this type of scan, damn the consequences!" it would give the query writer full control. I don't understand why you can't just force the system to follow a path - you as the developer pay for it regardless.
Nobody does, actually. Postgres not having deterministic query plans is a big pain and a good reason not to use it. The same query may use different query plans depending on the estimated number of affected rows, which is very frustrating.
It's been many years since I've had to use an Oracle DB, but they definitely allowed SQL developers to forcibly steer the query plan via "query hints", with which you could tell it to prefer (or avoid) certain indexes or certain join strategies (hash, loop, etc.). This was done via special comments embedded in the SQL query - see https://docs.oracle.com/cd/B13789_01/server.101/b10752/hints...
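From memory, such a hint looks roughly like this (the classic EMP sample table; the index name is made up):

    -- The /*+ ... */ comment asks Oracle to use a specific index for this query.
    SELECT /*+ INDEX(emp emp_name_idx) */ ename, sal
    FROM emp
    WHERE ename = 'SMITH';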
PostgreSQL doesn't have this; however, I've rarely missed the feature - tables with good indexes, regular DB hygiene, etc. almost always perform excellently. The query planner is very, very smart nowadays.
The query hints thing is actually quite useful in an educational context.
There's a DB internals/performance course I'm a TA for, and we lean heavily on being able to force Oracle's rule-based or cost-based optimisers, because then we can get the students to analyse and compare the plans.
I still find myself comparing it to oracle on these points.
As a user, postgres is a far more ergonomic database, but things like this show good old insane oracle is still strongly ahead on some of these points.
In practice, postgres is not as bad as you'd expect in the large, but it can still be annoying.
Oh that’s unfortunate, thanks for explaining it. Postgres has been in my list to check out but haven’t done any personal projects that I’d need it for… yet
Just make sure you test for your scale and with representative data and queries. There will be various tipping points with any technology. But you can find them with experimentation.
The way I see it, traditional databases are frameworks, and we need to switch to something more like libraries - use the high-level interface when we need it, but be able to dig underneath. Postgres has taken some small steps in this direction, e.g. making the parser available as a separate library; some newer systems (e.g. distributed-first datastores that combine LevelDB with some higher-level layer) go further.
MySQL offers this via use/force index (…). Similar problem where the QP will inexplicably decide one day to make a query really slow and you gotta override it.
We have a couple of queries where, using the correct index, they'll take milliseconds, and using the wrong index they'll take minutes - and MySQL chooses the wrong one often enough to be a noticeable load on the database, so we've specified "USE INDEX" even though I really hate having to do so.
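For anyone unfamiliar, the syntax looks like this (table and index names are made up):

    -- MySQL index hint: the hint goes right after the table reference.
    SELECT *
    FROM orders USE INDEX (idx_orders_customer_created)
    WHERE customer_id = 42
    ORDER BY created_at DESC;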
Someone who picked their tools with good tech judgement 25 years ago can be using the same today (eg PG, Python, Linux) without corporate control of them, it's pretty great.
That feels a bit like hindsight talking. Linux perhaps, but were Python and Postgres really the obvious good judgement choices 25 years ago? Every other choice was poor judgement?
In my personal experience of being around back then, postgres and python were still considered "technically better", but such a massive pain in the ass to install (especially on cheap shared hosting where it was often actually impossible to install) that only the most masochistic people would even try. I myself wrote a fastcgi implementation in PHP which would allow a web server which only supported php to call python under the hood and forward the inputs and outputs :P
It is kind of depressing that 25 years later, no other language has even attempted to compete with PHP in the “easy to get started on bargain-basement-tier shared web hosts” space D:
> no other language has even attempted to compete with PHP
PHP has been very good at reinventing itself and being its own biggest competitor. Zend outcompeted PHP3, PHP5 outcompeted PHP4, PHP7 outcompeted PHP5 and so on. Compare a site written in modern idiomatic PHP8 using something like Laravel to a site written in classic PHP3 and they might as well be two different languages.
That's how you airdrop straight into the worst of the Python 2/3 fiasco, probably the worst time in history to start using Python; meanwhile, Ruby on Rails and Java were probably the kings of that era.
Around 20 years ago, when I was trying to decide on what database to use, my requirement was that it should store data reliably. I learned that MySQL, in contrast to PostgreSQL:
1. wasn't ACID
2. didn't have foreign key constraints
3. could lose/corrupt committed data (no WAL)
Despite me not knowing much about databases it seemed like an obvious choice.
Hell, 10 years ago (in a multi database project) I got bit by MySQL not supporting check constraints, but returning successfully when I ran the create check constraint statements.
It finally supports them, which is nice. It doesn't support deferring checks to the commit like Postgres, but they are otherwise good. I've appreciated using them to enforce JSON schema at the DB level.
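To illustrate the old behaviour (before 8.0.16, if I remember right, MySQL parsed CHECK clauses and silently discarded them):

    CREATE TABLE accounts (
        id      INT PRIMARY KEY,
        balance INT,
        CHECK (balance >= 0)
    );

    -- Accepted without complaint on old MySQL versions,
    -- rejected with an error on 8.0.16+:
    INSERT INTO accounts VALUES (1, -50);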
MySQL didn't have transactions (for many years after that), so PG over MySQL would have been the "good technical judgement" case, though maybe not the majority choice. For Python, maybe I'm biased - but you could have gone with PHP in the argument if you swing that way, and it still works.
This solves the same problem, just not as well. The idea of an incrementally maintained view is that only the delta needs to be computed, so a count will increment or decrement as rows are inserted or deleted.
It means complex views that could take minutes or hours to calculate from scratch can be kept fresh in realtime.
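A hand-rolled sketch of the idea (not pg_ivm itself; table and column names are made up): keep a count up to date with triggers so only the delta is ever computed.

    CREATE TABLE order_counts (
        customer_id BIGINT PRIMARY KEY,
        n           BIGINT NOT NULL
    );

    CREATE OR REPLACE FUNCTION bump_order_count() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO order_counts VALUES (NEW.customer_id, 1)
            ON CONFLICT (customer_id) DO UPDATE SET n = order_counts.n + 1;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE order_counts SET n = n - 1 WHERE customer_id = OLD.customer_id;
        END IF;
        RETURN NULL;  -- AFTER trigger: return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_count_ivm
        AFTER INSERT OR DELETE ON orders
        FOR EACH ROW EXECUTE FUNCTION bump_order_count();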
I’ve built a few of these in snowflake for time series data (with dbt, which is fine but I don’t love it). My feeling so far is that the incremental bits are kinda fiddly and often domain-specific. Is it possible to define a single “shape” that would solve >90% of these incremental refresh scenarios?
pg_ivm has a ton of restrictions though (as do other PG incremental view refresh mechanisms, like timescale hypertables).
That's why I assume it's not in vanilla Postgres: adding the feature with so many caveats would not make for a great experience for the full breadth of Postgres users.
I'm not familiar with ClickHouse materialized views, but the other tech you list (as I roughly understand them) seem more concerned with streaming SQL, which is a related but different end user experience from incrementally refreshed materialized views.
In the same vein as "is your product a business, or is it just a feature", Postgres has really raised the bar to "is your product a database, or is it just an index in Postgres". A few databases make compelling cases for their existence, like Cassandra or Elastic/Solr, but surprisingly many databases really don't offer anything that can't be replicated with a GIN or GiST index on Postgres. It is the amorphous blob swallowing up your product and turning it into a feature. JSON handling or JSON column types, for example, are no longer a distinctive feature.
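As a concrete example of the "index in Postgres" point, a JSONB column plus a GIN index covers a lot of what document stores are sold on (names are made up):

    CREATE TABLE docs (
        id   BIGSERIAL PRIMARY KEY,
        body JSONB NOT NULL
    );

    CREATE INDEX docs_body_idx ON docs USING gin (body jsonb_path_ops);

    -- Containment query served by the GIN index:
    SELECT id FROM docs WHERE body @> '{"status": "active"}';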
And a surprising amount of other stuff (similar to lisp inner platforms) converges on half-hearted, poorly-implemented replications of Postgres features… in this world you either evolve to Cassandra/elastic or return to postgres.
(not saying one or the other is better, mind you… ;)
Postgres is still single-node-first, and while Citus exists I'm skeptical that it can ever become as easy to administer as a true HA-first datastore. For me the reason to use something like Cassandra or Kafka was never "big data" per se, it was having true master-master fault tolerance out of the box in a way that worked with everything.
If you are going to multi-node Postgres, you need to start planning for Cassandra/Dynamo.
That is a BIG lift. Joins don't really practically scale at the Cassandra/Dynamo scale, because basically every row in the result set is subject to CAP uncertainty. "Big Data SQL" like Hive/Impala/Snowflake/Presto etc are more like approximations at true scale.
A relational DBMS is sort of storage-focused in its design and evolution: you figure out the tables you need to store the data in a sensible way. Then you add views and indexes to optimize for viewing/retrieval.
Dynamo/Cassandra is different: you start from the views/retrieval. That's why it is bad to start with these models for an application - you haven't fully explored all your specific data structuring and access patterns/loads yet.
By the time Postgres hits the single node limits, you should know what your highest volume reads/writes are and how to structure a cassandra/dynamo table to specifically handle those read/writes.
But that's the gist of the article here, right? That Postgres is taking over all db-like use cases. It doesn't claim that it can replace Kafka but https://www.amazingcto.com/postgres-for-everything/ certainly does. Of course it's not a full replacement, but it might be good enough.
If your premise is that Postgres is eating the datastore world, then you're talking about using it as a replacement for Kafka and Cassandra.
Frankly if you zoom out far enough they're all systems suitable for use as your primary online datastore that you build your application on (each with their own caveats of course). There are places where they compete.
The latency increase is likely the price of distributed consistency, regardless of the specific DB: doing consensus between nodes is much slower than dumping a block of data on local NVMe.
I am not aware of such a machine in a single node, unless it is talking about vCPUs/threads. The 288-core Intel Sierra Forest doesn't offer a dual-socket option. So I have no idea where the 512 x86 cores came from.
>This year’s new EPYC 9754 goes even further, offering a single CPU with 128 cores and 256 threads. This means a standard dual-socket server could have an astonishing 512 cores!
The author pulled it from an article linked in the previous sentence. The numbers don't even add up unless it was a mistake or I'm missing something.
I think the terminology is a bit muddied here because AMD has historically referred to physical cores as "modules" and logical cores as "cores" (although their current spec sheet [1] seems to use "cores" and "threads" in the way that most understand them).
So in a dual-socket setup, 2 x EPYC 9754 would indeed yield 512 threads (logical cores), which are backed by 256 physical cores.
I'm using EF Core with SQL Server and PostgreSQL for two different production projects; it works like a charm, and the performance is great.
All recent projects in my company are PostgreSQL based (> 2000 production applications), and we have far fewer troubles with PostgreSQL than with Oracle, not to mention the licensing.
I have used it in multiple projects and it works great. The Npgsql.EntityFrameworkCore.PostgreSQL provider is always quickly updated when new versions of EF Core are shipped, and its primary maintainer @roji is also extremely talented and responsive. All around has been a pleasure to work with!
I'm managing two medium sized projects on that stack, using both EF core and Dapper. I'm about to switch a third over from Oracle because I'm so tired of all the issues with Oracle.
>As DuckDB’s manifesto “Big Data is Dead” suggests, the era of big data is over.
I have been stating this since at least 2020 if not earlier.
We are expecting the DDR6 and PCIe 7.0 specs to be finalised by 2025; you could expect them on the market no later than 2027. Although I believe we have reached the SSD IOPS limit, barring some special SSD with Z-NAND. I assume (I could be wrong) this makes SSD bandwidth on servers less important. In terms of the TSMC roadmap, that is about 1.4nm or 14A, although in the server sector they will likely be on 2nm. Hopefully we should have 800Gbps Ethernet by then with ConnectX card support. (I want to see the Netflix FreeBSD serving-1.6Tbps update.)
We then have software and DBs that are faster and simpler to scale. What used to be a huge cluster of computers that was mentally hard to comprehend is now just a single computer, or a few larger servers, doing the job.
There is 802.3dj 1.6Tbps Ethernet, aiming for completion in 2026, although products tend to take much longer to come to market compared to memory and PCI Express.
AMD Zen 6c in ~2025/2026 with 256 cores per socket; on a dual-socket system that is 512 cores, or 1024 vCPUs/threads.
That the threshold for Cassandra/Dynamo scaling keeps rising is probably the only real point. "Big data is dead" is a pretty stupid thing to say - typical clickbait marketing by a database that will probably be chucked away for something else trendy in another year.
But at a certain point, a 10,000 core 5 petabyte single megamachine starts to practically encounter CAP from the internal scale alone. It already ... kind of ... does.
And no matter how big your node scales, if you need to globally replicate data ... you have to globally replicate it over a network, and you need Cassandra (DynamoDB global replication is shady last I looked at it, I have no idea how row-level timestamps can merge-resolve conflicting rows updated in separate global regions)
I have a handful of sites I run on a VPS with a basic setup, including MySQL.
One thing I've always liked about MySQL is that it pretty much looks after itself, whereas with Postgres I've had issues before doing upgrades (this was with brew though) and I'm not clear on whether it looks after itself for vacuuming etc.
Should I just give it a go the next time I'm upgrading? It does seem like a tool I need to get familiar with.
25+ years ago MySQL was fast and easy to admin but didn't have rollback and a bunch of other features. At the same time, Postgres had the features but was horrible for performance and usability. Those days are LONG gone. MySQL obviously has all the features now, PG is great to admin, and autovacuum works well out of the box.
I run a bunch of clusters of PG servers around the world and they need almost no maintenance. In-place upgrades, without needing to go the dump/restore route, work well - 5 minutes on a TB-sized database - just make very, VERY sure you do a reindex afterwards or you will be in a world of pain.
Postgres updates are definitely a pain. MySQL is usually just a matter of upgrading the package and restarting the server for the projects I run, but postgres is a full dump and import process.
Great article! We also tried lots of databases in the past at SWCode, but ended up using Postgres for almost all our use cases. There really has to be a good argument for using something else that can't be covered by some Postgres extension.
Yes, the title is click-baity. Yes, Postgres isn't perfect and not the "best" choice for every possible use case in the universe.
But Postgres is a work of art, and compared to all the other relational database options, if it's ultimately crowned the king of them all, it'd be well deserved.
I'd also say that the PG protocol and the extensions ecosystem are as important as the database engine.
That would surely be impossible. TB is designed from the ground up to do one thing and do that one thing well. Postgres is a swiss army knife, and TB is a knife.
This post was very wrong and misleading on multiple points.
I have seen a lot of people praising Postgres over e.g. MariaDB. But more often than not it seems to be people who lack knowledge.
Take this linked post, where the author points out "The untuned PostgreSQL performs poorly (x1050)" later followed by "This performance can’t be considered bad, especially compared to pure OLTP databases like MySQL and MariaDB (x3065, x19700)".
First of all, those are not pure OLTP databases. And if the author had taken a better look at the benchmark, he would have seen that MariaDB using ColumnStore is at x98. That's 10x the out-of-the-box performance of Postgres, and 200x faster than the author stated.
Having used Postgres for many projects, yet never having used any of the other tools in the ecosystem, I'm surprised by how many tools there are!
How does one go about finding paying customers when developing a new database tool? How does one figure out the size of the market, and pricing structure?
It's not a best practice, it's a fad. 99% of people who recommend or use Postgres barely know how to use it. Another trendy database will come along and you'll stop seeing all these posts about it. Happens every decade. I'll link back to this post in a few years with "I told you so".
> Another trendy database will come along and you'll stop seeing all these posts about it. Happens every decade.
And then after a couple of years people will realise that Postgres can do everything the trendy database can do and come back to Postgres. Happens every decade. This is at least 'hype cycle' 3 for Postgres since I started my career.
> 99% of people who recommend or use Postgres barely know how to use it.
You're not wrong here, although you could just as easily say "99% of people who recommend $DB barely know how to use it."
Databases remain a mysterious black box to entirely too many people, despite the three largest (SQLite, Postgres, MySQL) being open source, and having extensive documentation.
I've come to the conclusion that most devs don't care about infra in the slightest, and view a DB as a place to stick data. When it stops working like they want, they shrug and upsize the instance. This is infuriating to me, because it's the equivalent of me pushing a PR to implement bogosort, and when told that it's suboptimal, dismissing the criticism and arguing that infra just needs to allocate more cores.
My god, are you me? I also thoroughly enjoyed this diatribe from [0]:
> First, a whole army of developers writing JavaScript for the browser started self-identifying as “full-stack”, diving into server development and asynchronous code.
> ...early JavaScript was a deeply problematic choice for server development. Pointing this out to still green server-side developers usually resulted in a lot of huffing and puffing.