
To give some anecdata: I have been bitten twice by going to a .0 release with Postgres, once by index corruption and once by some subqueries returning wrong results in some cases.

I have since decided to always wait for a .1 with Postgres before updating.

The good news is that this is only a few weeks past the initial release.

Now if only more people upgraded during the RC phase already, then everybody could go to a .0 release.

And conversely, if everybody follows my (and your) advice, then .2 will be the new .1.

It’s never easy




Depends on how much you trust the quality of those releases. `.1` is usually good enough for PSQL, except for the following issue.

14.4 was released with a fix for silent data corruption when using the CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY commands.

https://www.postgresql.org/about/news/postgresql-144-release...
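For context, the affected operations were the online index builds that let you (re)create an index without blocking writes. A minimal sketch, with hypothetical table and index names:

```sql
-- On 14.0-14.3, these concurrent builds could silently omit rows
-- that were updated while the build was in progress (fixed in 14.4).

-- Build a new index without taking a write-blocking lock:
CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);

-- Rebuild an existing index in place, also without blocking writes:
REINDEX INDEX CONCURRENTLY orders_customer_idx;
```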


That was a nasty one, but then again, my recommendation would still be to use `pg_repack` over REINDEX CONCURRENTLY, because it also gets you concurrent clustering, and it was not affected by this bug.

I'm not downplaying the issue (index corruption is really bad), but I would wager that admins who need concurrent reindexing are also aware of `pg_repack` and would prefer it anyway because of the other benefits it provides.

This is probably why it took 6 months for the issue to be reported and fixed.
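For reference, a typical `pg_repack` invocation looks something like the following; database, table, and column names here are hypothetical:

```
# Rebuild one table and all of its indexes online:
pg_repack --dbname=mydb --table=public.orders

# Rebuild only that table's indexes, the closest analogue
# to REINDEX CONCURRENTLY:
pg_repack --dbname=mydb --table=public.orders --only-indexes

# The concurrent clustering mentioned above: rewrite the table
# physically ordered by a column, without blocking writes:
pg_repack --dbname=mydb --table=public.orders --order-by=created_at
```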


What advantage does pg_repack have when you only rebuild an index? Or do you mean it has advantage when pg_repack is run on the entire table?


pg_repack has some significant downsides in its implementation; I question whether it's really a better default than REINDEX CONCURRENTLY. I've certainly not gotten that impression, and we maintain many very large Postgres clusters.


Interesting! Were the queries reliably returning wrong results?


It was in the 2012/2013 time frame, so I can't find the relevant release note any more, but it was reliably returning wrong results for a specific subquery pattern.

Not all subqueries were broken, but the broken pattern returned wrong results 100% of the time.

Index corruption shows the same symptoms, but, of course, it is a different cause.
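As an aside, B-tree index corruption like this can nowadays be checked for with the contrib `amcheck` extension; the index name below is hypothetical:

```sql
-- amcheck ships in contrib for PostgreSQL 10+.
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Verify the index's internal invariants (takes only a shared lock):
SELECT bt_index_check('orders_customer_idx'::regclass);

-- A stricter check that also verifies parent/child relationships:
SELECT bt_index_parent_check('orders_customer_idx'::regclass);
```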


It sounds like Postgres's test coverage could be better ...


Yes, PostgreSQL needs more tests, but no, it is not really a coverage issue. Most of the serious bugs have been related to concurrency or other things which simple test coverage can find. Finding these bugs is usually not trivial.


This is something FoundationDB does pretty well. They built a deterministic simulator that tests such things. I doubt you could port it to Postgres easily, though.


That is exactly the kind of tools PostgreSQL needs more of. There are some tools but more are needed. More plain old test coverage will not help much if at all.


You meant "cannot", right?

> which simple test coverage cannot find.



