
I don't care so much about optimized tight loops written in assembly as about the ability to scale nowadays. Datomic uses databases like the ones you describe, e.g. Postgres, as its storage, so it would be foolish to claim it beats their performance at the bare-bones level. Instead, Datomic's architecture and information model significantly reduce the overhead of designing and implementing applications that deliver insane performance. I won't dispute that you could hand-rewrite any Datomic application directly against its underlying storage database and squeeze out more performance, if you get your caching and coordination right. It would take much longer though (I'd guess tenfold at least), likely introduce some very hard-to-find bugs, and the result wouldn't be as easy to extend. With Datomic I get memory-speed performance out of the box for the heavy hitters, and so much more besides, that it would take some very uncommon requirements for me to choose something else nowadays.
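To make "out of the box" concrete, here's a minimal sketch against Datomic's Java Peer API with Postgres as the storage backend. The JDBC URL, credentials, and the :user/email attribute are made-up examples, and it assumes a running transactor and a schema that defines the attribute:

    import datomic.Connection;
    import datomic.Database;
    import datomic.Peer;
    import java.util.Collection;

    public class PeerQueryExample {
        public static void main(String[] args) {
            // Storage lives in Postgres; the peer reaches it through the
            // transactor. URI and credentials are illustrative.
            String uri = "datomic:sql://mydb?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic";
            Connection conn = Peer.connect(uri);

            // db() hands back an immutable snapshot. Hot data is served from
            // the peer's local cache, which is where the "memory speed" comes from.
            Database db = conn.db();
            Collection results = Peer.q(
                "[:find ?e :in $ ?email :where [?e :user/email ?email]]",
                db, "alice@example.com");
            System.out.println(results);
        }
    }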



Well, I would buy the developer productivity argument, but most applications have a mix of reporting requirements that are generally extremely hard to implement on anything "distributed". Also, in a distributed system you are either running some consensus algorithm (Paxos, Raft, etc.), which will definitely not deliver "insane performance", or it will have issues with consistency.


Datomic uses distributed storage, but writes are coordinated by a single instance (the transactor), so there is no per-transaction consensus round. Reads don't block writers, and immutability lets you query consistent snapshots. Does that address your concern?
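For a rough sketch of what that looks like in the Java Peer API (using an in-memory database and the built-in :db/doc attribute so it runs standalone; the data is illustrative):

    import datomic.Connection;
    import datomic.Database;
    import datomic.Peer;
    import datomic.Util;

    public class SnapshotExample {
        public static void main(String[] args) throws Exception {
            String uri = "datomic:mem://example";
            Peer.createDatabase(uri);
            Connection conn = Peer.connect(uri);

            // All writes funnel through the single transactor, which
            // serializes them; no consensus round per transaction.
            conn.transact(Util.list(Util.map(
                ":db/id", Peer.tempid(":db.part/user"),
                ":db/doc", "first"))).get();

            // A db value is an immutable snapshot: queries against `before`
            // keep seeing the old state while later writes land.
            Database before = conn.db();
            conn.transact(Util.list(Util.map(
                ":db/id", Peer.tempid(":db.part/user"),
                ":db/doc", "second"))).get();
            Database after = conn.db();

            String q = "[:find ?doc :where [_ :db/doc ?doc]]";
            System.out.println(Peer.q(q, before)); // sees "first" but not "second"
            System.out.println(Peer.q(q, after));  // sees both
        }
    }

The snapshot semantics are the point: `before` and `after` are each internally consistent views, and neither read blocked the writer.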



