Using the relational model for app data in memory is really interesting.
Martin Fowler wrote about doing that as a way to get around the "object-relational mismatch" issue[1]. Richard Fabian describes "data-oriented design" as having a lot of overlap with the relational model[2]. The ECSes that are becoming very popular in game engines are basically in-memory relational databases where "components" are "tables"[3].
No, components are columns. Each entity is a row in a table and an archetype (the set of all entities with the same components) is a table.
In ECS it's usual to store components of the same type together, which corresponds to columnar storage in database terms (or equivalently, it's as if the data were in sixth normal form [0]). However, in some systems you can opt to store a component in a more traditional row-based layout.
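To make the two layouts concrete, here's a toy sketch in Clojure (illustrative only; real ECS stores use flat arrays for cache locality):

```clojure
;; Row-based ("array of structs"): one record per entity.
(def entities
  [{:pos [0 0] :hp 10}
   {:pos [3 7] :hp 8}])

;; Columnar ("struct of arrays"): one store per component,
;; aligned by entity index -- the typical ECS layout.
(def pos [[0 0] [3 7]])
(def hp  [10 8])
```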
Thanks for the correction. In equating components and tables I was thinking of archetype-based engines, but even in that case they may store a whole set of components together in a single "table", right?
Conceptually I think you could equate components with relations. For example, a "grid location" component could be imagined as a relation with "entity ID" and "coordinate" columns. I suppose that admits multiple component instances per entity ID, though, which is typically not what you want.
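As a toy sketch (hypothetical names), that relation could be just a set of maps:

```clojure
;; A "grid location" component viewed as a relation over
;; entity ID and coordinate.
(def grid-location
  #{{:entity-id 1 :coordinate [0 0]}
    {:entity-id 2 :coordinate [4 2]}})

;; Nothing in the bare relation stops two rows from sharing an
;; :entity-id; "one instance per entity" needs a key constraint.
```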
I assume with “components are columns” we don’t mean strictly single-value columns. Small structs/aggregates could be quite common (like {from to} or {x y})?
This idea was raised years before Martin Fowler blogged about it. The tar pit paper linked in the article is from 2006, and its authors had been doing this for years prior to that.
Clojure(Script) always seems to me to be this hotbed of interesting ideas in programming. You'll see something wild like this start there, and eventually the concepts make their way out into regular JavaScript.
I'm almost starting to regret not picking Clojurescript for my app
Lots of stuff now established as "best practices" or whatever in frontend JavaScript applications comes from being made popular in ClojureScript development. Side effects as data, hot reloading, and keeping your app state in one place were all inspired by work happening in the ClojureScript community at the time the JavaScript community "discovered" them.
The ClojureScript community obviously didn't come up with everything; many of the ideas are very old ideas for UI development but didn't really exist in the "web-sphere" before. I'm pretty sure I can remember Dan Abramov saying Redux was directly inspired by stuff happening in the ClojureScript community, particularly around atoms and hot reloading. Also, I think Pete Hunt mentioned work David Nolen was doing when initially talking about React, but I'm less confident about that.
I wouldn't be surprised if Redux has cross-pollination from ClojureScript, but in terms of non-JavaScript languages that inspired Redux, the top of that list is probably Elm.
Either way, FP is slowly eating the web. "View as function of state" has thoroughly won the argument. What that will ultimately look like is unclear, but ignore it at your own peril.
It’s not just eating the web, * as function of state is generally the trend in many disciplines. So much so that people I knew who balked at the concept have since embraced it. And it is good.
Now we just have to get a handle on all the leaky stuff at the edges of every “functional core”, because there lie many dragons.
I want this to be true. And certainly there has been a lot of progress towards functional-ish code in the languages I'm about to reference.
In my experience, languages like C, Go, Java, C# still dominate the backend. Replacing JavaScript is such a simple and well-defined task that I expect it will be completed much sooner. Maybe only a decade or two.
Well I can’t speak to most of those, but I can definitely speak to JS: pretty much the only things holding back FP are people’s aversion to reduce and their aversion to grafting monads where they don’t fit. Otherwise it’s pretty much idiomatic to write functional-core code in JS basically everywhere. But again there be dragons at the edges.
Like I said. Functional-ish. Still way too many ThingDoers and other associated boilerplate to be actually interesting. A nice option though if you're working in a legacy codebase or your employer requires it.
You describe an effect as a data structure which gets interpreted and applied by the appropriate driver (for lack of a better term).
Think of how we do it on the wire: we send a description of an event (command/effect) to an API server, which routes/dispatches it to something that applies an effect. It’s not a function call, but a generic (implementation-agnostic) description of intent in pure data.
Is it kind of like events getting emitted and handled? I'm just a little fuzzy on what's creating the data/request/effect, and then where it's sending that once it's created, and then where the "drivers" would come from and how they would get invoked.
> where it's sending that once it's created, and then where the "drivers" would come from and how they would get invoked
Depends entirely on how you'd do it.
You can apply the Functional Core, Imperative Shell architecture, where the side-effecting, stateful code (the imperative shell) always calls the functional code (the functional core). The shell handles files, db connections, HTTP/TCP, exceptions, retries, etc., and basically asks the functional core what to do by providing it the data that comes out of those things.
For example, there's no reason an HTTP routing library has to do IoC. It can also be structured so that you give it a path and it returns you data, such as an event description, a set of questions, or a command description.
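A minimal sketch of that shape (all names hypothetical): the core is a pure function from a path to a data description, and the shell owns the actual I/O.

```clojure
;; Functional core: pure routing, data in, data out.
(defn route [path]
  (case path
    "/ping"   {:command :pong}
    "/report" {:query :sales-report}
    {:error :not-found}))

;; Imperative shell: owns sockets, exceptions, retries; it just
;; asks the core what to do with the data it pulled in.
(defn handle-request! [read-path! write-response!]
  (let [description (route (read-path!))] ; pure decision
    (write-response! description)))       ; side effect at the edge
```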
There are also frameworks that work this way, for example the UI state management framework re-frame. It handles the side effects for you and calls your functions that you register on certain UI events. The re-frame documentation is very good at explaining this step by step.
The side effect here is to stop a running ticker on the page, but the handler registered with `reg-event-fx` just returns a map (data) where the first key is :db (the new app state, which is like a db and can even have a sort of schema over it) and the second key is the actual effect, :stop-ticker, which takes an argument `handle` (the handle of the ticker to stop).
The event handler is just describing the side effect that is to occur by returning the map.
But as far as testing goes, you can test the event handler and check that it returns the expected map. The framework is responsible for actually executing the effect described by the event handler, and it does so via the name :stop-ticker. Your event handler itself can remain a pure function.
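A hedged sketch of that ticker example (the event name and db keys are made up; :stop-ticker is the effect name from the description above):

```clojure
(require '[re-frame.core :as rf])

;; Pure event handler: returns a *description* of what should happen.
(rf/reg-event-fx
 :ticker/stop
 (fn [{:keys [db]} [_ handle]]
   {:db          (assoc db :ticker-running? false) ; new app state
    :stop-ticker handle}))                         ; effect description

;; The impure part lives in one registered effect handler, which
;; re-frame invokes when it sees :stop-ticker in the returned map.
(rf/reg-fx
 :stop-ticker
 (fn [handle]
   (js/clearInterval handle)))
```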
Instead of doing the side effect you return a description and let the framework handle it. E.g. you don't call `transact(db, data)` but return `["transact", db, data]`. Now your function is pure. The framework will expect you to provide the transact handler, e.g. as `register_handler("transact", (db, data) => {..})`.
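The same idea sketched in Clojure (all names hypothetical):

```clojure
(def handlers (atom {}))

(defn register-handler! [effect-name f]
  (swap! handlers assoc effect-name f))

;; Pure code returns a description of the effect...
(defn decide [db data]
  [:transact db data])

;; ...and the framework looks up and runs the registered handler.
(defn execute! [[effect-name & args]]
  (apply (get @handlers effect-name) args))

(register-handler! :transact
                   (fn [db data]
                     (println "transacting" data "against" db)))

(execute! (decide {} {:x 1}))
```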
Personally I'm not a fan of this abstraction (another layer of indirection). I'd rather take a `transact` function as an argument.
> Mozart/Oz [...] a bit of a tragedy it has been mostly abandoned
Agreed. But wow the documentation was a mess. So much language progress these last decades has been around increasing minimum expectations for ecosystem.
Poplog (integrated Common Lisp, Prolog, ML, and a C-like; 1980s) is another on my list of roads regrettably not taken. Killed by commercialization. Which also zombied CL.
Basically any Expert System Shell in Lisp in the 80s/90s was a multi-paradigm programming system (ART, KEE, KnowledgeCraft, KnowledgeWorks, Babylon, and many others). There were also a bunch of functional/relational languages like Relfun and AP5 in Lisp, and a multitude of other logic/relational languages in Lisp besides.
If you had a Unix, and didn't have several hundred to a thousand (adjusted) dollars, in the mid-to-late 1980s and early 1990s, there was... very little. Gcc and CMUCL eventually existed and later became usable. My very fuzzy recollection is Poplog was available early on and cheapish but barriered (have your academic department negotiate with ours for a site license), then commercial.

Over those years, I repeatedly searched for an environment to live in, and repeatedly came up empty. And repeatedly thought: Poplog could own this space, could be the obvious no-competition language choice for non-commercial, non-proprietary development... but is trading that potentially massive impact for unicorn dreams and subsistence funding. Imagine a different mid-1990s, with gcc, and then Python and Perl and C++, struggling Tcl-like to gain traction against a widespread, active, accessible, portable, powerful Poplog/CL/ML tooling and community.
Wow that paper has a cool footnote about the “empty paradigm”:
> Of course, many of these paradigms are useless in practice, such as the empty paradigm (no concepts)[1] or paradigms with only one concept.
> [1]: Similar reasoning explains why Baskin-Robbins has exactly 31 flavors of ice cream. We postulate that they have only 5 flavors, which gives 2^5 − 1 = 31 combinations with at least one flavor. The 32nd combination is the empty flavor. The taste of the empty flavor is an open research question.
This looks really interesting! As someone who has worked with relational databases and Clojure in the past, I can definitely see the appeal of a functional relational programming model.
I like that relic provides support for declarative data processing and declarative relational constraints. These are areas that can be tricky to handle when working with traditional relational databases, so it's great to see a library that addresses these pain points.
The ability to use relic with reactive programming is also a big plus. I'm curious to see how this would work in practice, particularly in the context of an interactive application.
Overall, relic seems like a promising library for anyone looking to work with normalized data in Clojure. I'll definitely be giving it a try on my next project!
Dan recently recorded a session about Relic, if you'd like to hear him speak more about the origins of the library, some design choices, and some examples:
I guess SQLite + HoneySQL would be an alternative. Curious to know why the author prefers the relational/table/Codd model over graph/map databases like Datomic, and whether he thinks something like Datomic "falls out" of "Out of the Tar Pit".
Interesting. What are your thoughts regarding Hickey's comments about the structural rigidity of relations/tables for representing information: how they impede flexibility, make your program hard to change over time, and increase complexity?
He mentions it in various videos but these snippets are two quick finds:
I'm not the OP, but thinking well beyond the original topic of in-memory reactive programming (where my answer would be quite different) to the world of long-lived durable databases... one perspective to consider is that any system built around N-ary relations can automatically benefit from the full range of relational algebra for transforming and composing both base data and derived relations. The flexibility of N-ary relations is largely what has kept SQL databases relevant despite the flaws of SQL itself.
In contrast, a system that only handles base data in terms of triples assumes that you have a perfect attribute-oriented information model figured out upfront. But given this is rarely the case, users will want tools that help them easily transform/migrate their data and schema over time. Ideally this takes the form of a declarative language that minimises the amount of code that needs to be written. However, without a compelling end-to-end transformation language figured out, I think any alternative database systems with their alternative information models (triple-based or otherwise) will struggle to compare favourably with mainstream databases, where declarative data munging with SQL is considered valuable and routine.
Triples may well prove to be the best way to handle information in software over the long-term, but I'm not sure that the systems which currently work with triples are good enough or widespread enough to test that theory.
> In contrast, a system that only handles base data in terms of triples assumes that you have a perfect attribute-oriented information model figured out upfront. But given this is rarely the case, users will want tools that help them easily transform/migrate their data and schema over time.
Curious, where do you think Datomic and Datalog fall short (assuming one is using Clojure and Datomic's performance is tolerable)? By "perfect attribute-oriented model" I assume you are referring to the database's information model and its support for that model, not to domain modeling with triples. Triples and relations are equally flexible, but which model requires more effort to maintain and transform alongside your application?
I think Hickey's point is that the structural rigidity of relations, and stuff like the arbitrary join tables needed to create many-to-many relations, gets hard-coded throughout your application, making your applications very hard to change over time and forcing you to put extreme effort into providing a set of logical views to isolate the application from the physical structural decisions. He says "the more structural components (tables/intersection tables and having to name them (places)) you have in your model, the more rigidity you get in your applications". But as you mentioned, triples haven't proven themselves.
I'm not experienced enough to understand all the tradeoffs here and can't tell if Hickey is wearing a salesman hat ;)
Of course, databases like Postgres would be my first choice for most projects (even if using Clojure, which I like but is still a hard sell for webapps); as you said, they have too much going for them (the flexibility of relational algebra, SQL, being well understood and widespread, great and well-supported implementations, etc.).
To be clear I'm not experienced enough to judge all these tradeoffs properly either, and I've never worked with Datomic. I suppose my main point is that an ideal database UX would avoid having to write processing/transformation code that needs to run outside the database (Clojure is pretty great, but still). This vision probably runs counter to the 'deconstructed' design of Datomic which heavily emphasises the usage of Clojure around it, but that could possibly be reconciled. It may well even barely register as a significant gap for current users in practice - I have no real idea :)
> provide a set of logical views to isolate the application from the physical structural decisions
To me this feels like the holy grail for what people really want from databases (beyond the basics of transactions/durability), and triples definitely still hold promise as a helpful abstraction. However I suspect Incremental View Maintenance (as Relic discusses) has a potentially even bigger role to play here.
I think this is a deep battle-of-the-approaches topic - whether it's better to "rewrite history" or deal with the different shape of past data. But in the history preserving case the data model doesn't need to be perfect up front, you can approach the data in your application in a way that takes into account the evolution. Eg in Datomic there are practices that support this [1] [2].
Of course you can have dev-time scratch environments to play with stuff without having to support it.
Beautiful. I've been noodling on exactly this same idea. So far I've explored using SQLite, but it would be ideal if there was no SQL between me and my relations. Another direction I've taken is adding STM to JavaScript and implementing a JS API for querying relations.
How are updates triggered, what's the mechanism? From the examples it seems like you don't need to subscribe and provide a function to be called when changes happen, but then how does it work?
A relic db is a persistent data-structure [1]. Applying a transaction with rel/transact gives you a new database; rel/track-transact also returns the changes to relations you have opted in to change tracking for (using rel/watch).
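Roughly, in code (a hedged sketch built from the names above; the exact transaction and return shapes are assumptions, see the relic docs):

```clojure
(require '[com.wotbrew.relic :as rel])

;; A relic db starts as a plain (persistent) map.
(def db (rel/transact {} [:insert :Customer {:id 1 :name "Ada"}]))

;; Opt in to change tracking for a relation...
(def db2 (rel/watch db [[:from :Customer]]))

;; ...then track-transact returns the new db plus the changes to
;; watched relations (return shape assumed).
(rel/track-transact db2 [:insert :Customer {:id 2 :name "Grace"}])
```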
Funnyduck99 is probably mocking all the people that go into HN posts about Clojure to complain about how their team of non-Clojure programmers went to work on a Clojure project and it was a bad experience.
No, I genuinely don't like Clojure because I just finished my first Clojure class. It was also my first functional language, and it was online, and I am bad at learning online, so that's why.
I hope you get an opportunity to explore it (FP, whether Clojure or otherwise) in a more conducive environment. It sounds like this wasn’t the best learning environment for you, and that’s totally valid, but there’s a lot of good stuff to learn if you’re in an environment that suits you.
LPT: While learning Clojure, the following (almost always true) mental model helped me massively at "getting" Clojure.
"It's Maps All the Day Down"
Spend a lot of time just learning how you (CRUD) map contents.
There will be enough time to tackle the other cases/tech (atoms, protocols, etc.), but until you get good at maps, don't get bogged down by the other cool stuff.
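For instance, everyday map CRUD looks like this; each operation returns a new map rather than mutating the old one:

```clojure
(def user {:name "Ada" :age 36})

(assoc user :email "ada@example.com") ; create/update a key
(get user :name)                      ; read (also (:name user))
(update user :age inc)                ; update via a function
(dissoc user :age)                    ; delete a key
(get-in {:a {:b 1}} [:a :b])          ; read into nested maps
```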
In reality, it's actually Trees all the way down. But because you don't have proper structures in Clojure, one uses Maps when one should actually be using Trees.
We (professional developers using lisp-like languages daily) don't have the reaction of "ewww" as soon as we see something foreign, but instead we think "hmm, that looks different, I wonder why it looks different?" and then we start to try to understand it. Then you give a lisp-like language a try (Clojure in my case) and suddenly it's really hard to program in any other language, because they're not as good as a lisp language.
If you have a knee-jerk reaction to everything that looks different, you'll lose out on lots of fun stuff. You should give lisp a try, I'm sure you won't regret it.
> If you have a knee-jerk reaction to everything that looks different, you'll lose out on lots of fun stuff. You should give lisp a try, I'm sure you won't regret it.
Yes, I always tell people: sure, you don't need to like LISP, but as a serious CS professional, AT LEAST give it a real, HONEST go! Then if you still don't like it, cool, move on!
There is an active, lively, and long-lived Lisp-like community even today; there is a high probability there is something to it.
Grandparent programs in Clojure, which doesn't use car and cdr.
ANSI Common Lisp has first and rest as synonyms of car and cdr. Books from the middle 1980s on the then emerging Common Lisp already cover this.
It's like you're griping about an issue that was closed before CVS existed, let alone git.
car and cdr still exist because of backward compatibility and because they are good names for when cons cell structure is just arbitrary structure and not a list. The car is not always the first item of a list, and cdr is not always the continuation of a list. These words are deeply entrenched in the Lisp culture. The words have no confusing associations with anything else but the parts of a cons cell. I've not come across any "car" or "cdr" usage in computing anywhere, except Lisp-inspired jokes like in the "Locator/Identifier Separation Protocol". In the telecomm industry, CDR stands for "call detail record", but that's removed from programming language and data structuring.
Knuth defined some binary-cell tree structures in TAOCP. He used the words alink and blink (as in A link, B link). Those words are not bad.
It's good if the parts of a flexible data structure that programs use for representing all kinds of things have names that don't have any confusing associations. When you see those names, you know they are just literally about that structure, and you can see from how those words are used what the shape of the structure is.
CAR and CDR have origins in a hardware feature of the IBM 704. But they came into Lisp via Fortran. The list processing in Lisp was closely inspired by FLPL, the Fortran List Processing Language: a system for linked-list manipulation done in Fortran. FLPL had functions like XCARF and XCDRF and others. McCarthy greatly improved the naming by dropping the gratuitous X...F noise. (What had they been thinking?)
I like the structure that S-expressions convey. Must be something with how my brain works. It's probably also why I find Python code utterly unreadable.
In most cases, an imperative example in a curly-bracket language (JavaScript as one example) has the same number of parens and curly brackets as a Lisp has in parens. They're just in different places.
And, the non-Lisp languages tend to have more rat droppings => , ; . etc.
For example, f here is a function which takes two arguments, adds them, and returns the result:
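```clojure
;; Clojure:
(defn f [a b] (+ a b))

;; The JavaScript equivalent, for comparison:
;;   function f(a, b) { return a + b; }
;; Roughly the same bracket count, plus the extra `,` and `;`.
```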
For me much of the appeal of the syntax comes from using it with an editor-integrated REPL, or when writing macros, or when generating linter configuration from a domain model, etc.
However, I agree with you on some level. Lisp code can easily be written in a way that's hard to read, specifically because of nesting expressions way more often than is comfortable for me to read. I much prefer let bindings and threading, so most of the code reads from left to right.
I totally agree with this, but my love of Lisps wasn't instantaneous. When it clicked though — which required some effort — and I could appreciate the elegance of the language, I didn't want to use anything else.
Regarding Python, I've always disliked its significant whitespace and OO inclination. Given its popularity, though, I'm clearly in the minority!
To each their own! I feel the same way about Algol family languages, too many arbitrary design decisions that have been internalised for decades at this point for my liking.
[1]: https://martinfowler.com/bliki/OrmHate.html
[2]: https://dataorienteddesign.com/dodbook/
[3]: https://github.com/SanderMertens/ecs-faq#what-is-ecs