Why Postgres (craigkerstiens.com)
91 points by jashmenn on Sept 6, 2012 | 83 comments



I love Postgres, and I usually get grumpy when forced to use MySQL, but ... rather than lots of fluffy features (most of the stuff listed here is not that compelling taken individually), the thing for me about Postgres is that it's solid, reliable, and dependable. It behaves correctly. It has transactions. It has DDL transactions. It can, by and large, be counted on to get things right rather than take shortcuts. It's a well-crafted tool that you can get a lot of mileage out of if you know what you're doing.
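For instance, transactional DDL means a half-finished migration can simply be rolled back. A minimal sketch (table and columns are made up):

  BEGIN;
  CREATE TABLE accounts (id serial PRIMARY KEY, balance numeric NOT NULL);
  ALTER TABLE accounts ADD COLUMN owner text;
  -- something goes wrong mid-migration...
  ROLLBACK;  -- both statements vanish; no half-applied schema left behind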


In other words, it is what you find yourself sorely wanting when you use MySQL for a non-trivial length of time.


Well, yeah, that or some inanimate object to vent my frustrations on by kicking it.


Oh yes, transactions…


Interesting articles. I read part I and part II.

As a long-term MySQL user who has dabbled with production Postgres machines, I still don't see how Postgres really has any huge advantages that would cause me to want to run it for any new projects and deal with a new and possibly daunting learning curve. Postgres seems to do some things better, perhaps, and in general seems to have more flexibility than MySQL. But it also seems to lack focus as an end-to-end db solution.

The clustering / replication complexity of Postgres is a huge issue for me, since scaling databases is not a trivial thing in the best of cases. With postgres, there seem to be 10+ different cluster / replication systems (both open-source and commercial?) and it's just a mess.

Giving people tons of options is great, but ease of use for developers and an amazing & simple out-of-box experience is so important. MySQL won the hearts and minds of developers this way... and now MongoDB is doing the same thing by improving on what MySQL was able to accomplish. The Postgres team should consider implementing something similar.


I find postgres more pleasant and faster to develop in than mysql. It can help you avoid the need for a lot of application code, and feel more confident in your solution. I can't say any one thing that will convince you; you basically have to dive in and try doing things the "postgres way" (i.e. don't try to transplant a mysql mindset on postgres, or you will be disappointed). Try out the various features available, and see which ones might help you develop faster or avoid problems.

Postgres has strong support for single-master replication, including synchronous (several levels) or async, cascading, etc. Read scaling and HA are well covered.
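To give a flavor, a minimal 9.1-era streaming-replication setup is just a few settings. A sketch (hostname and user are made up, and you'd also need a replication entry in pg_hba.conf):

  # primary: postgresql.conf
  wal_level = hot_standby
  max_wal_senders = 3

  # standby: postgresql.conf
  hot_standby = on

  # standby: recovery.conf
  standby_mode = 'on'
  primary_conninfo = 'host=primary.example.com user=replicator'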

What postgres doesn't have built-in is sharding or multi-master. I'm not trying to make excuses for this, but here are some thoughts to put things in perspective:

* There is active development on multi-master in postgres.

* Your database needs to be quite large before it won't physically fit on a single node.

* By the time you really need to scale out writes beyond a single node, you may be very interested in the fine details of what's happening, and the flexibility and myriad options available in postgres may be just what you need.

* If you focus too much on one feature, it starts to seem more important than it is. It's easy to forget the flaws in the feature in other products; and the fact that businesses existed at high scale before the feature existed. Lack of a feature you want is more like a hurdle, not a dead-end.


>Your database needs to be quite large before it won't physically fit on a single node.

If you are hosting in the cloud then pricing favours lots of smaller instances rather than single bigger ones. And many startups with limited budgets (e.g. me) still have significant data requirements.

> By the time you really need to scale out writes beyond a single node, you may be very interested in the fine details of what's happening, and the flexibility and myriad options available in postgres may be just what you need.

Or you can press a single button on most newer databases, e.g. CouchDB, HBase, and MongoDB, and have sharding/clustering just work.

I don't know why people defend PostgreSQL on this and aren't pushing for a more coherent, in-built solution. It would definitely blunt a lot of the growth of NoSQL.


"I don't know why people defend PostgreSQL on this and aren't pushing for a more coherent, in-built solution."

I specifically said that I wasn't trying to make excuses, and that people are actively working on built-in multi-master replication.

I encourage you to keep pushing though. People have been finding weaknesses in postgres for a long time and those weaknesses have been disappearing quickly with releases delivered every year. All the while, some great innovations have been coming along that no other database has.

I've been involved in postgres for a long time and it's always interesting to think back on the way features have been demanded and then delivered. After multi-master, there will be another round of must-haves. But in the meantime, it's important to also work on new innovations that might ultimately be more important to the cloud use case than multi-master is.


One thing to consider is that because of its MVCC architecture (whereby reads can never block writes, and writes can never block reads), and especially with the locking changes coming in 9.2, you can probably get more concurrent activity out of a single, well-tuned Postgres instance than you could out of a slew of MySQL hosts of comparable size.

That wards off your need to go down the (admittedly sometimes complicated) path of PostgreSQL replication, and I say that as a guy who basically makes his living off setting up replicated/HA Postgres...


Figure I should provide some info rather than just downvote: http://dev.mysql.com/doc/refman/5.5/en/innodb-multi-versioni...


I'm well aware that InnoDB is MVCC-based. Unfortunately, its implementation of MVCC is hindered by the way the MySQL kernel handles locks, which are completely outside the domain of a storage engine.

Consider:

  -- on MySQL, be sure to say "engine=innodb" before the semicolon...
  CREATE TABLE locks_suck (id int primary key, val text);
  -- or however you'd say "generate_series(start, end)" in MySQL...
  INSERT INTO locks_suck (id) VALUES (generate_series(1, 10));
Now, say the following in one session:

  BEGIN;
  UPDATE locks_suck SET val = 'Transaction 1' WHERE id BETWEEN 1 AND 5;
  -- Note that I haven't committed yet...
And then, in a second session:

  BEGIN;
  UPDATE locks_suck SET val = 'Transaction 2' WHERE id BETWEEN 6 AND 10;
In MySQL, you'll hang for a moment, and then see:

  ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
In Postgres, you'll see:

  UPDATE 5
That's the kind of concurrency MVCC can buy you. Readers can't block writers, writers can't block readers, and writers can only block other writers trying to write the same row.


Hey, thanks for the example. I think your phrasing was a little unclear, but clearly you know what you're talking about, so I reversed my downvote into an upvote. [edit: crap, now I can't do that, the button's gone]

I also tried to look into the behavior, and it turns out what we're running into here is a 'gap lock', necessary to support ranges in statement-based replication (the default). You can change the behavior if you're using row-based replication; it's just not the default.

For instance, if the second transaction in your example were "BETWEEN 7 AND 10", then both transactions would run concurrently no problem; it's acting like row+1-level locking.

Here's a much more detailed explanation: http://www.mysqlperformanceblog.com/2012/03/27/innodbs-gap-l...
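For reference, the row-based setup that relaxes gap locking looks roughly like this (a sketch; double-check the exact semantics for your MySQL version before relying on it):

  # my.cnf
  binlog-format         = ROW
  transaction-isolation = READ-COMMITTED

Under READ COMMITTED, InnoDB drops gap locks for ordinary searches (keeping them only for things like foreign-key and duplicate-key checks), which is why row-based binlogging is required for it to be safe.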


This appears to be an issue with how InnoDB handles range row locks, not MySQL (since it's InnoDB that's issuing that lock contention warning). It doesn't happen if you're not using ranges:

i.e. issuing in session 1

    BEGIN;
    UPDATE locks_suck SET val = 'Transaction 1' WHERE id = 1;
and in session 2

    BEGIN;
    UPDATE locks_suck SET val = 'Transaction 2' WHERE id = 2;
doesn't block. These locks are not issued at the MySQL layer when locks_suck is an InnoDB table; they are issued by InnoDB itself (MySQL doesn't actually handle transactions itself, so it would have to be the storage engine).

See dev.mysql.com/doc/refman/5.0/en/innodb-lock-modes.html for more info on how InnoDB issues those locks.


By choosing Pg over MySQL or Mongo, you are selecting a high-quality, stable product that is on a very slow and steady development trajectory with very few missteps along the way.

If you don't want to be surprised by the interesting and unexpected ways your database functions, choose Postgres.


  > If you don't want to be surprised by the interesting and
  > unexpected ways your database functions, choose Postgres.
If you don't want to be surprised by how your database works, learn about it. It doesn't matter whether it's MySQL or Postgres. There's too much handwaving here.


> slow and steady development trajectory

Slow being the operative word. Sharding and JSON support should have been added as basic features years ago.

Now if you need either (very common) PostgreSQL is a non-starter.


"Now if you need either [Sharding or JSON] (very common) PostgreSQL is a non-starter."

Instagram used sharded postgres, and they certainly started up.

http://instagram-engineering.tumblr.com/post/10853187575/sha...

Native JSON support is coming in 9.2 (being released very soon) and will rapidly improve.
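To give a taste, the 9.2 type validates on the way in and ships with row/array conversion functions. A small sketch against the beta (table name made up):

  CREATE TABLE events (id serial PRIMARY KEY, payload json);
  INSERT INTO events (payload) VALUES ('{"user": "alice"}');  -- malformed JSON is rejected here
  SELECT row_to_json(e) FROM events e;  -- serialize any row as JSON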

PostgreSQL is being used by startups and major enterprises alike to great effect. If you have any interest in it, I encourage you to dive in to some real problems and see how postgres can help you solve them.

Yes, your criticisms may be valid, and postgres is always improving. But if you keep an open mind and actually try to solve real problems with it using all of the tools it has to offer, you may be pleasantly surprised.


Patently false.

Skype was supporting tens of millions of concurrent active users on a sharded PostgreSQL setup years ago (like, 2006-07 or so). Yes, the support wasn't native, and they had to write PgBouncer and PL/Proxy to enable that kind of scalability, but they open-sourced both projects, and they're pretty widely used in many environments for exactly that purpose.

As far as JSON, the hstore extension has been available since 8.3, which was released in 2008. Again, not native (though that's being addressed with 9.2, which should drop any time now), but not particularly difficult to use. A trivial web search shows people using JSON with hstore at least as far back as 2010, if not earlier.
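For anyone who hasn't tried it, hstore is just an indexable key/value column type. A 9.1+ sketch (on 8.3 you'd install it from contrib rather than CREATE EXTENSION; table and keys are made up):

  CREATE EXTENSION hstore;
  CREATE TABLE docs (id serial PRIMARY KEY, attrs hstore);
  INSERT INTO docs (attrs) VALUES ('author => alice, lang => en');
  CREATE INDEX docs_attrs_idx ON docs USING gin (attrs);
  SELECT * FROM docs WHERE attrs @> 'lang => en';  -- containment query; uses the index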


Not false.

Skype added sharding support themselves. And it requires you to use stored procedures instead of SQL making it unusable for most users.

And I am talking about native JSON support as a first class citizen. Nobody is going to lock their entire data structure to a third party extension that may or may not disappear.


"And I am talking about native JSON support as a first class citizen."

Wait a week for 9.2, please.


If your intention is to store data structures in JSON form -- i.e. pretty much a pure graph database -- then maybe a relational database engine is a bad fit anyhow.


It's a lot nicer, but I see switching technologies as being a formula: if the benefit minus the cost is greater than zero, switch. The benefits can be gigantic and it can still be better not to switch if the costs are high enough. Your bank still runs COBOL, after all.

Most of the people writing these articles are in the best places to switch: noodling on the side, between gigs, or they're students. They're not running an enterprise supporting millions of litigious customers on hundreds of servers running millions of lines of proprietary software that all depend on MySQL. It would be absolute lunacy to switch in that case; even if Postgres were twice as fast, used half the space and cut your costs in half, it would still be hard to justify the cost and danger.

I think Postgres is a lot better, and anybody for whom the equation justifies it should switch. But that isn't going to be everybody every time, and that's fine, because we should make these decisions rationally rather than because it's cool now or it looks like it will be fun.


PostGIS is an extension of Postgres that turns it into the best geospatial database out there.

http://postgis.refractions.net/



Another cool feature: recursive queries using the WITH common table expressions.

Say you have a table fs <id, name, parent_id> representing a hierarchical filesystem, and you want to print the full path of each file, here's how you can do it in a single query:

  WITH RECURSIVE path(id, name, parent_id, path, parent) AS (
    SELECT id, name, parent_id, '/', NULL FROM fs WHERE id = 1 -- base case
    UNION
    SELECT fs.id, fs.name, fs.parent_id, parentpath.path || 
      CASE parentpath.path WHEN '/' THEN '' ELSE '/' END || 
      fs.name as path, parentpath.path as parent
      FROM fs INNER JOIN path AS parentpath ON fs.parent_id = parentpath.id
  ) SELECT id, name, path FROM path;
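For reference, this assumes a table along the lines of the following sketch, with id = 1 as the root:

  CREATE TABLE fs (
    id        int PRIMARY KEY,
    name      text NOT NULL,
    parent_id int REFERENCES fs (id)
  );

  INSERT INTO fs VALUES
    (1, '', NULL),         -- the root directory
    (2, 'home', 1),
    (3, 'notes.txt', 2);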


I don't know the details but won't that be quite inefficient? (it's not a construct I've seen before - never used Postgres in anger)

You're effectively working with a tree and there are much more relational friendly ways of doing that in SQL.

I know I'm just picking on this specific use but I can't help imagining that recursive querying will always be slow. Would love to hear how it's implemented if that's not the case.


All that WITH RECURSIVE does is allow a WITH query to refer to its own output. Beyond that there is no overhead.

Here is the query plan I got for my above query:

  QUERY PLAN
  CTE Scan on path  (cost=292.79..304.41 rows=581 width=104)
    CTE path
      ->  Recursive Union  (cost=0.00..292.79 rows=581 width=72)
          ->  Index Scan using fs_pkey on fs  (cost=0.00..8.27 rows=1 width=40)
                Index Cond: (id = 1)
          ->  Hash Join  (cost=0.33..27.29 rows=58 width=72)
                Hash Cond: (public.fs.parent_id = parentpath.id)
                ->  Seq Scan on fs  (cost=0.00..21.60 rows=1160 width=40)
                ->  Hash  (cost=0.20..0.20 rows=10 width=36)
                      ->  WorkTable Scan on path parentpath  (cost=0.00..0.20 rows=10 width=36)


But that could end up being quite deep couldn't it? Is it not like stacking up an unknown number of correlated queries? Could you even screw it up and have an infinite joining condition?
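For what it's worth: UNION's duplicate elimination can break a simple cycle, but only when the recursive rows repeat exactly, which they won't here since the path column keeps growing. The standard explicit guard from the docs is to carry an array of visited ids, roughly:

  WITH RECURSIVE search(id, parent_id, seen) AS (
      SELECT id, parent_id, ARRAY[id] FROM fs WHERE id = 1
    UNION ALL
      SELECT fs.id, fs.parent_id, seen || fs.id
      FROM fs JOIN search ON fs.parent_id = search.id
      WHERE fs.id <> ALL (seen)  -- skip rows we've already visited
  ) SELECT id FROM search;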


The main advantage of using common table expressions is the improved readability and ease of maintenance of complex queries; after a while using them, coming up with a solution for a complex query is quite easy. Regarding performance, it depends on what you are trying to accomplish. Sometimes there are performance penalties, but in my personal experience (using them in SQL Server) I have never run into a case where the performance isn't good. Well, that's not entirely true: in cases where you need to return large datasets, CTEs are never the best solution.


If you have self-referencing rows, you're going to wind up with two options: an inefficient recursive query or inefficiently issuing N+1 queries. The recursive query would wind up being faster simply because there's a lot less overhead. That said, I don't know what additional optimizations or penalties are going on in the system, but I have never converted a situation from N+1 queries into a recursive query and found a performance degradation.

Of course other options should always be considered. Joe Celko has a book on storing trees in the database I've been meaning to pick up.


It's a reasonable book, from memory, although there are problems with all the methods.


No doubt - the various tree methods all have their drawbacks too (more to manage when manipulating the tree).

It's all going to depend on your usecase but in general these sorts of path operations tend to be more read and less manipulation. You'll almost certainly get much faster lookups if you're not using recursive queries (as you can normally just use an index).


With all due respect, this is one of those "it's impressive because it was done at all" sorts of things. Seriously, if this is what your storage system forces you to do to compute the equivalent of:

   path(x) { return x ? path(x->parent) + "/" + x->name : "" }
... then you're using the wrong storage system. Yikes.


It's true, it is kind of a PITA. Users of our VLA observation preparation tool can nest scans inside scan loops, and this is represented in the database with the self-referencing PK. In the tool we never really need to do the nasty recursive select, but occasionally I need to do them to do reports, and it's never a great joy.

That said, I'm glad I have the power, and I wouldn't throw away Postgres and switch to something else just because something else might store hierarchies more naturally. Postgres is not the perfect tool for every use case, but having hierarchical data by itself isn't enough reason to throw it away.


My favorite thing to do with recursive WITH is the Mandelbrot set. I used that one as an example on http://embedclip.herokuapp.com/


Could someone that's really familiar with Postgres give me the skinny on replication? I like Postgres a lot, but finally moved all of our data storage to MySQL because, according to the documentation I read just a couple months ago, Postgres couldn't really do multi-master replication.

This post, dated prior to the documentation I read before the switch, seems to suggest it would do it just fine as long as it was asynchronous. I'm not clear though on just how ugly its async replication might be (http://www.postgresql.org/docs/8.4/static/high-availability....).

I'd like to use dbmail to store mail in a database replicated across multiple servers (which would need fairly reliable replication), as well as syslog and other logging facilities to sql (where reliability isn't quite so big of a deal).

Is Postgres up to this now, or not?


No, Postgres' built-in replication doesn't support multi-master.

With one of the many third-party replication packages that have grown around PostgreSQL (specifically, Bucardo v5 which is in beta right now), you can do multiple — as in more than two — writeable masters, though, as the linked page notes, the replication is asynchronous. With older versions of Bucardo, you can do two masters — again, asynchronously (though you can technically get more than two if you want to engage in some significant configuration gymnastics).

What they're calling "synchronous multi-master" replication is actually implemented using "two-phase commit", which isn't replication per se, but rather application code using a database feature that allows specifically crafted database interactions to be written to multiple databases, and only successfully committing if the participating nodes agree that they've all committed.


> What they're calling "synchronous multi-master" replication is actually implemented using "two-phase commit", which isn't replication per se, but rather application code using a database feature that allows specifically crafted database interactions to be written to multiple databases, and only successfully committing if the participating nodes agree that they've all committed.

I'm not sure I understood; what is the difference between the two?


As commonly defined, "replication" is something that happens in, and is managed by the database itself, or a daemon written for that purpose (whether the daemon works at the logical level, replicating database tuples, or at the physical level, replicating disk blocks), and of which an application can be largely, if not wholly agnostic. With two-phase commit, OTOH, an application has to be specifically written to take advantage of that functionality.
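Concretely, in Postgres the application drives two-phase commit itself with PREPARE TRANSACTION / COMMIT PREPARED (requires max_prepared_transactions > 0). A bare-bones sketch, run against each participating node (table and transaction id are made up):

  BEGIN;
  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
  PREPARE TRANSACTION 'xfer-42';  -- durably parked; survives a crash

  -- once every node has prepared successfully:
  COMMIT PREPARED 'xfer-42';
  -- (or ROLLBACK PREPARED 'xfer-42' if any node failed to prepare)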


What happens if one of the nodes goes offline? System stops accepting requests or it keeps going and ends up in an inconsistent state?


Failure modes are defined by the two-phase commit protocol, the standard way of managing distributed database changes:

http://en.wikipedia.org/wiki/Two-phase_commit_protocol


You should use the "this page in other versions" mechanism at the top of the page you linked to so you can see more recent documentation: most of what PostgreSQL has with regards to replication is from versions more recent than 8.4 (the version you were reading).

http://www.postgresql.org/docs/9.2/static/high-availability....

(I'm curious why you want to do replication for dbmail: it would seem like what you really need/want is partitioning; e-mail metadata, especially with effective indexes, requires a lot of write throughput, and synchronous replication is going to largely cause the same write load on all systems.)


Thanks, I hadn't even noticed that link at the top.

(To answer your other question: at the moment our mail load isn't severe enough to pose a problem for synchronous replication, and I'm just aiming for the ability to have multiple mail hosts share the same mail data with no single point of failure. As the mail load outgrows current infrastructure, I'll start partitioning -- although I haven't figured out how to do that just yet. I've considered a distributed file system, but HAMMER has only just recently started to look like it's up to the task.)


You can accomplish that with synchronous multi-slave replication with failover: as far as I can work out in my head right now, the result will be similar to the behavior you will get from a synchronous multi-master setup assuming you can work things out so the slave can be used for read-only queries.


I don't think I can do that without writing my own mail daemons for pop and imap, unfortunately, which I'd rather not do at this stage. All of the ones that I know of expect to be able to read/write metadata over the same connection to the same database or filesystem.

Failover setups aren't my favorite option. They seem to be easy to get wrong, and when you get them wrong, they only do wrong things at the exact moment that you most need them to be doing right things.


So, if you are doing synchronous multi-master you will also need to do explicit failover (in this case, not from server A to server B, but from server A/B to only server B): otherwise, a partition between the servers would be catastrophic to your data integrity (as there would be no way for the servers to know whether they should start accepting data that might be different from its buddy, as both think the other is offline).

Once you start thinking in terms of multiple servers (whether it be based on replication or partitioning) you have to start thinking about these kinds of complex corner case issues, as you have moved from working with "a server" to "a distributed system", with all of the associated theoretical limits (such as CAP).


You're right, although the MySQL binary log used for replication handles this fairly gracefully for reasonable outage periods. Collisions are still possible after a resync, but it tries pretty hard to resolve them using timestamps on the transactions.

I'm working on some software to automatically manage outages, deploying new server instances, re-syncing databases, etc., but that's quite a few steps away from where we're at right now. For near-term purposes, anything that could do as good of a job at multi-master replication as MySQL can would be just fine.


I can then only wish you luck in your attempt to take an off-the-shelf system (dbmail) that was designed to be used with an ACID database and plop it on top of what is now an only eventually consistent data store without first rewriting it to tolerate those semantics ;P.


> You should use the "this page in other versions" mechanism at the top of the page you linked to so you can see more recent documentation: most of what PostgreSQL has with regards to replication is from versions more recent than 8.4 (the version you were reading).

I wonder if there's a way the Postgres guys could fix that with some Google-bot directives or something. Usually, the linked page when searching is for an older version, and you have to go click 9.X yourself if you're interested in the latest and greatest.


I'd say that, all in all, multi-master is still generally pretty hard to use -- there are packages that do work very well (consider the .org registry, which runs on Slony), but I can't really say with conviction that they are cohesive and easy to use.

You can use synchronous replication in the new version to ensure commits have been flushed to standby before getting a commit ACK. You should run at least two standbys, because otherwise the standby going down halts progress on the primary (after all, it cannot guarantee 2-safety if the one and only secondary is down).

But as for multi-master for scale-out, it's a no-go without some whacking. It may still be worth it, but it's definitely not The Best Thing about Postgres today.

The plain-old replication -- both synchronous and asynchronous -- though, works wonderfully. If you just need read scale-out and can tolerate some staleness in results, usually measured in milliseconds (but certain circumstances can make it greater, and these are knowable/measurable in real time), then I think you'd be served pretty well.
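Concretely, the two-standby synchronous setup is one line on the primary. A 9.1+ sketch (the names are whatever application_name each standby connects with):

  # primary: postgresql.conf
  synchronous_standby_names = 'standby1, standby2'
  # the first connected standby in the list is the synchronous one;
  # if it goes away, the next one takes over that role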


You might take a look at Postgres-XC (http://postgres-xc.sourceforge.net/), and see if it meets your needs.


Oooh. That looks really nice, thank you very much for pointing me to that.


As already mentioned, there's no in-core multi-master replication, but besides the solutions already listed, you can have a look at:

http://postgres-xc.sourceforge.net/

The project is sponsored by EnterpriseDB and NTT.

A quote from the home page:

"Postgres-XC is an open source project to provide a write-scalable, synchronous multi-master, transparent PostgreSQL cluster solution. It is a collection of tightly coupled database components which can be installed in more than one hardware or virtual machines."

[edit: sorry for the noise, I didn't notice that Jeff Davis already commented about Postgres-XC]


Original author here. After the original post there was so much great feedback that it was worthwhile to publish a part two: http://craigkerstiens.com/2012/05/07/why-postgres-part-2/

Additionally, there will probably be a third post when Postgres 9.2 is released soon, highlighting some of the great new features such as the JSON datatype.


I'd add "business risk" into the discussion. Mysql has an interesting road ahead of it, and there are certain people that just won't put open-source Oracle in their critical path.


Fortunately, there are at least two drop-in replacements, if Oracle should drop the ball. Percona Server, and MariaDB. Both are based on the well-vetted core of MySQL, and both are continuing to make huge strides in the capabilities and stability of MySQL.

Of course, Oracle hasn't dropped the ball yet; 5.5 was a good, solid release from them.


A few weeks ago I discovered Postgres schemas (worst-named feature ever -- nothing to do with SQL schema definitions). Basically, schemas allow you to have multiple databases within a single database. So, you only need to manage (or host/pay for) a single database, but each company can have their own schema, which gives you complete separation of data across all the tables.

These, together with the Apartment gem for Rails, make building isolated multi-tenant applications really easy. It also makes migrating to a true multi-database/multi-server/sharded setup much simpler.
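A rough sketch of the pattern in plain SQL (tenant names are made up):

  CREATE SCHEMA tenant_acme;
  CREATE TABLE tenant_acme.users (id serial PRIMARY KEY, email text);

  CREATE SCHEMA tenant_globex;
  CREATE TABLE tenant_globex.users (id serial PRIMARY KEY, email text);

  -- per connection: unqualified names now resolve to this tenant
  SET search_path TO tenant_acme;
  SELECT count(*) FROM users;  -- reads tenant_acme.users only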


If you were using Oracle or MS SQL you'd be quite familiar with what Postgres calls a schema.

The SQL standard as I understand it is very vague on how schemas should be implemented beyond the information schema.


I don't value the underlying database as much anymore since I have been working with Rails apps. The database is just an API. I want to write my code in such a way that I could unplug Postgres and plug in SQL Server tomorrow without skipping a beat. Using AREL you should largely be able to do that.


1. No app of significant size will actually allow this. If you've ever tried to switch over a large data set with a high query volume you'll know that it never "just works".

2. Just because there's an API behind it, doesn't mean that it doesn't matter which one you use. As a trivial example, Linux and OSX are both POSIX but it's hard to make an argument it doesn't matter which you use.


I bet after you switched your data over you wrote the conflicting SQL lines in a much more generic way.


No, the issue is not conflicting SQL. An ORM solves that problem extremely easily.

It's the fact that query planners and performance optimizations implemented by each database are different, and thus directly porting your current schema and queries doesn't work.


> No app of significant size will actually allow this.

Never heard of ORM ? In the Java world it is very, very common to have database independence.

And most enterprise deployments (i.e. significant size) are Java.


Of course I've heard of ORM.

Theoretical "Database independence" is easy to achieve: just follow the SQL standard. All your queries will execute. There's nothing java-specific about this at all. ORMs are extremely common across most languages.

Practical "Database independence" is very hard to achieve, because the query planners and performance optimizations differ between databases.


I agree with you. I'd like to expand a little:

The main people who can get a benefit from database independence are those who ship software to a client site to be executed. Imagine if you wrote some kind of accounting software package that you handed off to a corporation's IT group to manage: supporting more databases is probably more important than supporting more operating systems (mostly because the number of OSes requiring support has collapsed for most new applications). There will be installations running on different databases all the time.

The second most beneficial effect is if you are planning to change databases frequently (why?). Personally I don't think this is usually worth it for technical reasons, you may as well accept it'll suck when it comes time to do this and take whatever shortcuts make life easier and more correct before that. However, it can be useful as a hedge against your database management system vendor trying to play the role of an extortionate gangster or, alternatively, lying down and dying.

One thing I like about Postgres is that it is a project, and not a product. There is no overarching database vendor to extort you or die off suddenly: project death is conditional on a lack of interest -- both financial and personal -- in the project's maintenance and improvement. Truly, it will have to be completely obsolete in the marketplace to die. PGCon getting hit by a meteor would be a major setback, though.

The downside is that a project can't often make strategic or speculative investments in whizbang features: there has to be some gestalt agreement that a new feature is both worthwhile and implemented very well (both in terms of correctness and maintainability: the odds of you being able to ask another programmer in real time what is going on in a module are very low), and that means quite a bit of humming and hawing before committing to a feature or approach.

It's not for everyone: some whizbang feature or other may enable your business to grow fast enough and survive long enough to deal with the potential pitfalls of having a potentially extortionate or dead database vendor (the old generation: the usual proprietary-RDBMS suspects. Probably the new generation: database-implementation-by-wire only). But it's something to put on the scales, at least.


There are still some advantages to the database you choose. For example, ActiveRecord will support the JSON datatype which will come with Postgres 9.2. While they may attempt to make this work in some form for other databases, the performance differences between the two will be quite vast.


The JSON data type is nothing more than a self-validating varchar field, i.e. fairly pointless.

You're better off just validating the JSON yourself in the application layer and maintaining cross-database compatibility. To be honest, it's rare that you'd even need to validate explicitly: when you deserialize into objects, it will just fail at that point.


Sure, but combined with PLV8 you can trivially have indexes on the JSON.
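Something like this sketch, assuming the plv8 extension is installed (the table and helper function names are made up):

  -- given, say: CREATE TABLE events (id serial, payload json);
  CREATE EXTENSION plv8;

  CREATE FUNCTION json_string(data json, key text) RETURNS text
  LANGUAGE plv8 IMMUTABLE STRICT AS $$
    return JSON.parse(data)[key];  -- pull one field out of the JSON
  $$;

  CREATE INDEX events_user_idx ON events (json_string(payload, 'user'));

  -- the planner can now answer this from the expression index:
  SELECT * FROM events WHERE json_string(payload, 'user') = 'alice';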


"The database is just an API."

APIs matter a lot. In my opinion, software engineering could almost be defined as the practice of designing good APIs.

I happen to think postgres, and its brand of SQL, are great APIs. It has a lot to offer beyond a lowest-common-denominator API that works over any database.


> The database is just an API.

Tell me how that leaky abstraction is working for you when it sinks.


Oh, downvote for what would be a good business decision in most companies? (unless super high performance is necessary?)


Didn't downvote you but read the guidelines:

Resist complaining about being downmodded. It never does any good, and it makes boring reading.

http://ycombinator.com/newsguidelines.html


Good link, I'll not complain next time :)


Is Oracle's ownership of MySQL becoming an issue? I realize it's open source, but still it seems like things have begun to deteriorate slightly. Is that causing any alarm?


Yes and no. Oddly enough, Oracle has owned the most important part of MySQL (InnoDB) for years, so their buying the rest of it was almost a benefit, bringing it all under one roof. They then went on to put out a very good release (5.5) that didn't suffer from the "don't use it till it's in the teens" birthing pains that most previous releases did.

The thing is, most of the improvements in that release were really the work of a bunch of companies that had given up on getting their patches accepted upstream and were trading around and maintaining their own patchsets: "the Google patches", "the Facebook patches", "the Percona patches", etc. MySQL/Oracle brought their long-stagnant release up to where the rest of the community had already gotten.

The nice thing is that means it's already been proven that if they go dark and just squat on a stale code base for years, it's not actually going to hold anything back.

The two things they have done are raise prices and stop providing source code for new unit tests. Neither is a good thing, but they're pretty trivial in the overall scheme of things.


That isn't entirely fair to the InnoDB/MySQL developers at Oracle - from what I've heard there has in general been an increase in development activity within Oracle. Some of that is merging patches from downstream, but most of the work has been on other improvements.

Performance in 5.6, for example, has improved a lot even over patched versions - e.g. https://www.facebook.com/notes/mysql-at-facebook/can-innodb-...


One thing I appreciate about using Django and its ORM is that it hardly matters what database I use.

If I'm writing SQL by hand I much prefer Postgres to MySQL, since it is more standards-conforming and has fewer random quirks. But given that Django generates the SQL for me, which database to use is largely a matter of ease of deployment, and on AWS that's MySQL (via RDS), so that's what I use.


Here's a link to the previous discussion http://news.ycombinator.com/item?id=3910743


To answer his question: yes, Virginia, we (people who use MySQL) use replication a lot. Especially at mid-grade levels of scaling, where custom sharding and replication middleware hasn't become necessary yet but vertical scaling is out of the question.

I'd say the bigger issue with Postgres now is that middleware like http://code.google.com/p/vitess/ hasn't been deployed in the large for it yet. Most Postgres scaling anecdotes I've heard were:

"Well we put 64 gb of ram in the server and installed a RAID array of SSDs and stopped writing dumb unindexed queries."

Well that's just dandy, but what if my indices don't even fit in the RAM of a single machine?


> I'd say the bigger issue with Postgres now is that middleware like http://code.google.com/p/vitess/ for it hasn't been deployed in the large yet.

That simply isn't true. Take a look at PgPool (http://www.pgpool.net/mediawiki/index.php/Main_Page) - it provides many of the same core features as Vitess, goes beyond it to provide replication and flexible sharding/partitioning, and is a good number of years more mature than Vitess.

PL/Proxy is similar middleware (though implemented as stored procedures) developed by and used at Skype for massive Pg partitioning.

There are dozens more packages like this at Pgfoundry (http://pgfoundry.org/). What specifically do you feel is missing?


>What specifically do you feel is missing?

A very large deployment of it that has shaken out the bugs.


You get the feeling that Postgres needs some marketing. It's a great piece of software, but if people can't easily find out whether it's been used at massive scale, they aren't going to trust it.

Like it or not, most deployments of MySQL probably happen because people can easily hear about it being used for massive databases. The decision isn't based on technical merits (or lack thereof).


Skype doesn't count?



