This looks very good, but after using Cassandra I'd be scared of bumping into features that are not supported, or that are suddenly very slow when used. For example, I'm sure sorted aggregations and GROUP BY queries across partitions will take a long time.

I think there should be a list of Postgres partitioning gotchas somewhere to accompany this, e.g. that partitioning goes hand in hand with denormalisation (because the choice of partition key matters so much). With Cassandra I can simply add more machines as the data duplication increases; I'm not sure how easy it is to rebalance data when adding new Postgres instances?




I'm not sure what you mean by "adding new Postgres instances." Postgres is not a distributed database, at least not on its own. There are ways to scale using e.g. hot standbys (streaming replicas for read-only queries), and multiple projects implement distributed databases on top of Postgres (Citus, Postgres-XL, ...). But then it depends mostly on that project, not on Postgres.

Sure, partitioning does negatively affect some queries. But the thing is, it's a significant improvement over the previous inheritance-based implementation, where the optimizer had almost no insight into the partitioning rules and so could not really do advanced tricks (e.g. partition-wise joins).
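To see the difference, a rough sketch with invented names, assuming a parent table events with a created_at column (the two snippets are alternatives, not meant to run together):

    -- pre-v10, inheritance style: the partitioning rule only exists as a
    -- CHECK constraint the planner has to infer things from
    CREATE TABLE events_2017_11 (
        CHECK (created_at >= '2017-11-01' AND created_at < '2017-12-01')
    ) INHERITS (events);

    -- v10, declarative style: the bounds are first-class metadata,
    -- so the planner knows the partitioning rule directly
    CREATE TABLE events_2017_11 PARTITION OF events
        FOR VALUES FROM ('2017-11-01') TO ('2017-12-01');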


I'm not sure what the point of partitioning would be if the data isn't distributed between nodes? I'm probably not understanding.

All I'm saying is that with, say, Cassandra, I can go and read about the pain points for this type of stuff, because of where it comes from, and it tries to protect you from doing things that are slow. With Postgres I think it's going to be harder to know when using a feature is suddenly going to kill my performance.

It would be great if there were a resource on how to avoid such issues in partitioned Postgres, but it'd take a lot of work to write such a guide. I'm guessing things like JSONB queries, geo queries, transactions, aggregations across partitions and many extensions are all going to cause pain points?



Performance, mainly.

Partitioning means your data can be spread across more physical areas (or FS partitions, or whatever) at a lower level, giving the database (and you) more control over performance by optimising what gets stored where.

Official pg doc here: https://www.postgresql.org/docs/10/static/ddl-partitioning.h... - the pertinent bits are as follows:

---

Partitioning can provide several benefits:

- Query performance can be improved dramatically in certain situations, particularly when most of the heavily accessed rows of the table are in a single partition or a small number of partitions. The partitioning substitutes for leading columns of indexes, reducing index size and making it more likely that the heavily-used parts of the indexes fit in memory.

- When queries or updates access a large percentage of a single partition, performance can be improved by taking advantage of sequential scan of that partition instead of using an index and random access reads scattered across the whole table.

- Bulk loads and deletes can be accomplished by adding or removing partitions, if that requirement is planned into the partitioning design. ALTER TABLE NO INHERIT and DROP TABLE are both far faster than a bulk operation. These commands also entirely avoid the VACUUM overhead caused by a bulk DELETE.

- Seldom-used data can be migrated to cheaper and slower storage media.

The benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.

---

EDIT: oops, link updated. Thanks justinclift!
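To make the doc's last two bullets concrete, a rough sketch in v10 syntax (table and tablespace names invented):

    -- drop a whole partition's worth of data without a bulk DELETE
    -- (and without the VACUUM debt a DELETE would leave behind)
    ALTER TABLE events DETACH PARTITION events_2017_11_01;
    DROP TABLE events_2017_11_01;

    -- migrate a cold partition to cheaper, slower storage
    ALTER TABLE events_2016_12 SET TABLESPACE slow_hdd;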


Might be better to use the v10 version of that doc, instead of the v9.1 one:

https://www.postgresql.org/docs/10/static/ddl-partitioning.h...

9.1 is unsupported these days, etc. :)


Okay - so partitioning gives better per-machine performance by allowing you to parallelise certain things and spread data out on disk. I thought this was similar to Cassandra's ring-based partitioning, which distributes data across many machines. Thanks for clarifying.


That will eventually be coming in some form. In the OP link, there was a mention of foreign tables participating in partitioning:

> While foreign tables can be partitions in v10, they are currently subject to an unfortunate limitation: tuple inserts that route to a foreign table will fail, since this feature has not been implemented yet. Etsuro Fujita (NTT) is working to address this limitation by revising code originally written by Amit Langote.

So there is work on getting this to scale across multiple machines rather than just one. We're just not there yet.
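For the curious, this is roughly what a foreign-table partition looks like with postgres_fdw in v10 (server, host, and table names invented). Reads work today; the missing piece is routing inserts to the foreign partition:

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER shard2
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard2.example.com', dbname 'app');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER shard2 OPTIONS (user 'app');

    -- a remote table acting as one partition of a local partitioned table
    CREATE FOREIGN TABLE events_2017_12
        PARTITION OF events
        FOR VALUES FROM ('2017-12-01') TO ('2018-01-01')
        SERVER shard2;

    -- SELECTs covering this range get shipped to the remote server;
    -- INSERTs into the parent that would route here fail in v10.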


Partitioning is not about distribution, it's about carving up the data to improve performance.

For example, let's say you store a table "events", where each event has a timestamp. You could partition this by day, for example. Every day would get its own physical table. If you have a year's worth of data and have a query that only needs data from a single day, it would only need to look at 1/365 of the complete dataset. Both tables and indexes would only need to capture the subset of data for each partition. Row selectivity tends to become problematic for large datasets.

You can of course do this stuff manually — create one table per day, make sure you insert into the right table, always select from the right tables based on which dates you're looking at — and people do this. What partitioning brings to the table is automation: the partitions are still "normal" tables, but Postgres manages them for you. You just define the partitioning keys/functions, and Postgres handles both insertion (you insert into the "parent" table) and querying (you just include your partitioning column in your query, and it figures out which tables need to be accessed).
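A minimal sketch of that events example in v10 syntax (all names invented):

    -- the parent holds no rows itself, just the partitioning rule
    CREATE TABLE events (
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    -- one physical table per day
    CREATE TABLE events_2017_11_01 PARTITION OF events
        FOR VALUES FROM ('2017-11-01') TO ('2017-11-02');
    CREATE TABLE events_2017_11_02 PARTITION OF events
        FOR VALUES FROM ('2017-11-02') TO ('2017-11-03');

    -- insert into the parent; Postgres routes the row to the right day
    INSERT INTO events (created_at, payload)
    VALUES ('2017-11-01 12:34', '{"type": "click"}');

    -- include the partition key and only that day's table is scanned
    SELECT count(*)
    FROM events
    WHERE created_at >= '2017-11-01' AND created_at < '2017-11-02';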

There are solutions (such as Citus) that allow you to partition across multiple Postgres nodes.


This is a brilliant example and explanation. I can see now why you would want to partition when you have lots and lots of data, even without multiple machines. I know more about using Cassandra or Riak for this stuff than an SQL database.



