Hacker News

* By "restrictive schemas" I mean being forced to represent your data in suboptimal ways – like not being able to have nested objects in a first-class way. Schemas themselves are extremely important, and they should be as tight as possible.
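To make the distinction concrete, here's a minimal sketch in plain Python (not Rama's API) of a nested object stored first-class versus the same data force-flattened into parallel flat tables that must be joined back together on read:

```python
# A user profile with nested data, stored first-class as one value.
profile = {
    "name": "alice",
    "settings": {"theme": "dark", "notifications": True},
    "followers": ["bob", "carol"],
}

# The same data force-flattened for a store without nested values:
# three separate "tables" that must be kept in sync.
users = {"alice": {"name": "alice"}}
settings = {("alice", "theme"): "dark", ("alice", "notifications"): True}
followers = {"alice:0": "bob", "alice:1": "carol"}

def read_profile_flat(user):
    # Reassembling the nested object requires multiple lookups/scans.
    return {
        "name": users[user]["name"],
        "settings": {
            "theme": settings[(user, "theme")],
            "notifications": settings[(user, "notifications")],
        },
        "followers": [v for k, v in sorted(followers.items())
                      if k.startswith(user + ":")],
    }

assert read_profile_flat("alice") == profile
```

The flattened version works, but every nested read and write pays a translation tax that a first-class nested schema avoids.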

* Rama's JVM-based, so the entire ecosystem is available to you. You can represent data as primitive types, Java objects, Protobuf, Clojure records, etc.

* You deploy and manage your own Rama clusters. The number of nodes / instance types depends on the app, but Rama doesn't use more resources than traditional architectures combining multiple tools.

* Some databases support multiple very specific data models (e.g. Redis). I don't consider that flexible compared to Rama, which allows for arbitrary combinations of arbitrarily sized data structures with arbitrary partitioning.
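As a toy illustration of that shape (plain Python, not Rama – the partition count, routing function, and in-memory dicts are all stand-ins), here's a partitioned map whose values are themselves sorted maps, supporting per-key range queries:

```python
import zlib

NUM_PARTITIONS = 4

# Hypothetical model of a partitioned structure shaped like
# {user -> sorted map of {timestamp -> event}}. Real PStates are
# durable and distributed; this only illustrates the nesting.
partitions = [{} for _ in range(NUM_PARTITIONS)]

def partition_for(key):
    # Routing by a stable key hash picks the partition for this key.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

def append_event(user, ts, event):
    p = partitions[partition_for(user)]
    p.setdefault(user, {})[ts] = event

def events_between(user, start, end):
    # Range query over the nested sorted map for one user.
    inner = partitions[partition_for(user)].get(user, {})
    return [inner[t] for t in sorted(inner) if start <= t < end]

append_event("alice", 100, "login")
append_event("alice", 200, "post")
append_event("bob", 150, "login")
assert events_between("alice", 0, 150) == ["login"]
```

The point is that the nesting (map of sorted maps, or lists, sets, etc.) and the partitioning scheme are both chosen per application rather than fixed by the database.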

* Depots (the "event sourcing" part of Rama) can optionally be trimmed. So you can configure a depot to keep only the last 2M entries per partition, for example. Some applications need this, while others don't.
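Per-partition trimming behaves like a bounded log. A minimal sketch (plain Python, with a small cap standing in for a realistic setting like 2M entries):

```python
from collections import deque

# Hypothetical model of one depot partition with trimming enabled:
# a bounded log that keeps only the most recent MAX_ENTRIES records.
MAX_ENTRIES = 5  # stand-in for a real setting like 2M entries

partition_log = deque(maxlen=MAX_ENTRIES)

for i in range(12):
    partition_log.append({"offset": i, "event": f"e{i}"})

# Older records have been dropped; only the last 5 remain.
assert [r["offset"] for r in partition_log] == [7, 8, 9, 10, 11]
```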

* If you're adding a new PState, it's up to you how far back in the depot to start. For example, you could say "start from events appended after a specific timestamp" or "start from 10M records ago on each partition".
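The two start policies mentioned above can be sketched as simple filters over a per-partition log (plain Python; the tuple-of-`(timestamp, event)` layout is an assumption for illustration):

```python
# Hypothetical per-partition depot log of (timestamp, event) records.
log = [(100, "a"), (150, "b"), (200, "c"), (250, "d"), (300, "e")]

def start_from_timestamp(log, ts):
    # "Start from events appended after a specific timestamp."
    return [e for t, e in log if t > ts]

def start_from_last_n(log, n):
    # "Start from N records ago on this partition."
    return [e for _, e in log[-n:]]

assert start_from_timestamp(log, 150) == ["c", "d", "e"]
assert start_from_last_n(log, 2) == ["d", "e"]
```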

* We have a first-class PState migrations feature coming very soon. These migrations are lazy, so there's no downtime. Basically you can specify a migration function at any level of your PStates, and the migration functions are run on read. In the background, it iterates over the PState to migrate every value on disk (throttled so as not to use too many resources).
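The migrate-on-read pattern can be sketched like this (plain Python, not Rama's migrations API – the per-entry version tag and the throttle parameter are assumptions made for the illustration):

```python
import time

class LazyMigratingStore:
    """Toy model of migrate-on-read plus a throttled background sweep.
    Not Rama's API; the versioning scheme is an assumption."""

    def __init__(self, data, migrate_fn, current_version):
        # Each stored entry carries the schema version it was written at.
        self.data = data            # key -> (version, value)
        self.migrate = migrate_fn   # old_value -> new_value
        self.version = current_version

    def read(self, key):
        version, value = self.data[key]
        if version < self.version:
            # The migration function runs on read, so readers always
            # see migrated values with no downtime; the result is
            # written back so the work happens at most once per key.
            value = self.migrate(value)
            self.data[key] = (self.version, value)
        return value

    def background_sweep(self, throttle_s=0.0):
        # Migrates every remaining stored value, throttled so the
        # sweep doesn't use too many resources.
        for key in list(self.data):
            self.read(key)
            time.sleep(throttle_s)

old = {"u1": (1, {"name": "alice"}), "u2": (1, {"name": "bob"})}
store = LazyMigratingStore(old, lambda v: {**v, "bio": ""},
                           current_version=2)

assert store.read("u1") == {"name": "alice", "bio": ""}  # migrated on read
store.background_sweep()
assert all(ver == 2 for ver, _ in store.data.values())   # fully migrated
```

Because reads migrate eagerly and the sweep migrates the long tail, the store converges to the new schema without ever blocking traffic.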



