Hacker News
Relic: Functional relational programming for Clojure(Script) (github.com/wotbrew)
224 points by zonotope on Feb 26, 2023 | hide | past | favorite | 93 comments



Using the relational model for app data in memory is really interesting.

Martin Fowler wrote about doing that as a way to get around the "object-relational mismatch" issue[1]. Richard Fabian describes "data-oriented design" as having a lot of overlap with the relational model[2]. ECSes, which are becoming very popular in game engines, are basically in-memory relational databases where "components" are "tables"[3].

[1]: https://martinfowler.com/bliki/OrmHate.html

[2]: https://dataorienteddesign.com/dodbook/

[3]: https://github.com/SanderMertens/ecs-faq#what-is-ecs


> "components" are "tables"

No, components are columns. Each entity is a row in a table and an archetype (the set of all entities with the same components) is a table.

In ECS it's usual to store components of the same type together; this corresponds to columnar storage in database terms (or equivalently, it's as if the data were in sixth normal form [0]). However, in some systems you can opt for a more traditional row-based layout.

[0] https://en.wikipedia.org/wiki/Database_normalization#Satisfy...
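For the curious, the columnar/archetype idea can be sketched in a few lines of JavaScript (all names here are invented for illustration, not from any real ECS library): each component is a parallel array (a "column"), and an entity within an archetype is just a row index.

```javascript
// Minimal columnar component storage: one array per component ("column"),
// entities within an archetype are rows identified by their index.
function makeArchetype(componentNames) {
  const columns = Object.fromEntries(componentNames.map(n => [n, []]));
  let count = 0;
  return {
    spawn(values) {               // add a row: one value per component
      for (const n of componentNames) columns[n].push(values[n]);
      return count++;             // entity id = row index
    },
    column(name) { return columns[name]; },  // whole column, cache-friendly
    get(entity, name) { return columns[name][entity]; },
  };
}

const movers = makeArchetype(["position", "velocity"]);
const e = movers.spawn({ position: { x: 0, y: 0 }, velocity: { x: 1, y: 2 } });
// A "system" iterates one column without touching unrelated data:
for (const p of movers.column("position")) { /* integrate, render, ... */ }
```

A real engine would pack the columns into typed arrays for cache locality, but the table/column/row correspondence is the same.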


Thanks for the correction. In equating components and tables I was thinking of archetype-based engines, but even in that case they may store a whole set of components together in a single "table", right?

Conceptually I think you could equate components with relations. For example a "grid location" component could be imagined as a relation with "entity ID" and "coordinate" columns. I suppose that admits multiple component instances per entity ID though which is typically not what you want.


I assume with “components are columns” we don’t mean strictly single value columns. Small structs/aggregates could be quite common (like {from to} or {x y})?


Yes


> #Satisfying_6NF

6th Normal Form? I was taught that 3NF should be enough for all intents and purposes, which now sounds like the quote about 640k memory.


I'd love to see something like Relic able to operate over capnproto/arrow/parquet/avro/hdf5 etc along with a DAG based recomputation engine.


This idea was raised years before Martin Fowler blogged about it. The tar pit paper linked in the article is from 2006, and the authors had been doing this for years prior to that.


Clojure(Script) always seems to me to be this hotbed of interesting ideas in programming. E.g. you'll see something wild like this start here, then eventually the concepts make their way out into regular JavaScript.

I'm almost starting to regret not picking Clojurescript for my app


Lots of stuff now established as "best practices" or whatever in frontend JavaScript applications was first made popular by ClojureScript development. Side effects as data, hot reloading, and keeping your app state in one place were all inspired by work happening in the ClojureScript community at the time the JavaScript community "discovered" them.

The ClojureScript community obviously didn't come up with everything; many of the ideas are very old ideas in UI development but didn't really exist in the "web-sphere" before. I'm pretty sure I remember Dan Abramov saying Redux was directly inspired by stuff happening in the ClojureScript community, particularly around atoms and hot reloading. I also think Pete Hunt initially mentioned stuff David Nolen was working on when talking about React, but I'm less confident about that.


I wouldn't be surprised if Redux has cross-pollination from ClojureScript, but in terms of non-JavaScript languages that inspired Redux, the top of that list is probably Elm.

See https://redux.js.org/understanding/history-and-design/prior-... which lists Elm right under Flux.


Either way, FP is slowly eating the web. "View as function of state" has thoroughly won the argument. What that will ultimately look like is unclear, but ignore it at your own peril.


It’s not just eating the web, * as function of state is generally the trend in many disciplines. So much so that people I knew who balked at the concept have since embraced it. And it is good.

Now we just have to get a handle on all the leaky stuff at the edges of every “functional core”, because there lay many dragons.


I want this to be true. And certainly there has been a lot of progress towards functional-ish code in the languages I'm about to reference.

In my experience, languages like C, Go, Java, C# still dominate the backend. Replacing JavaScript is such a simple and well-defined task that I expect it will be completed much sooner. Maybe only a decade or two.


Well I can’t speak to most of those, but I can definitely speak to JS: pretty much the only things holding back FP are people’s aversion to reduce and their aversion to grafting monads where they don’t fit. Otherwise it’s pretty much idiomatic to write functional-core code in JS basically everywhere. But again there be dragons at the edges.


Java and C# can absolutely be written in an FP style and it is definitely winning grounds.


Like I said. Functional-ish. Still way too many ThingDoers and other associated boilerplate to be actually interesting. A nice option though if you're working in a legacy codebase or your employer requires it.


> Now we just have to get a handle on all the leaky stuff at the edges of every “functional core”, because there lay many dragons

My gut is telling me erlang might have tackled this.


Don't worry. It didn't.


surely "reducer" came from early FP leaks in mainstream


Can you give an example of what you mean by side effects as data?


You describe an effect as a data structure which gets interpreted and applied by the appropriate driver (for lack of a better term).

Think of how we do it on the wire: we send a description of an event (command/effect) to an API server which routes/dispatches it to something that applies an effect. It’s not a function call, but a generic (implementation-agnostic) description of intent in pure data.

It’s like that but inside your program.
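A tiny sketch of the idea in JavaScript (all names invented for illustration): the pure code returns a description of the effect, and a table of "drivers" at the edge knows how to actually perform each kind.

```javascript
// Pure function: decides WHAT should happen and returns only data.
function onSubmit(form) {
  return { type: "http/post", url: "/api/users", body: form };
}

// Drivers: the impure edge that knows HOW to apply each kind of effect.
const drivers = {
  "http/post": effect => { /* a real driver would call fetch(effect.url, ...) */ },
};

function run(effect) {
  return drivers[effect.type](effect);
}

// Because the effect is plain data, it is trivial to log, inspect, and test
// without a network in sight:
const effect = onSubmit({ name: "Ada" });
// effect.type === "http/post", effect.body.name === "Ada"
```

Testing the pure part never touches a driver, and swapping drivers (real HTTP vs. a fake for tests) needs no change to the pure code.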


Is it kind of like events getting emitted and handled? I'm just a little fuzzy on what's creating the data/request/effect, and then where it's sending that once it's created, and then where the "drivers" would come from and how they would get invoked.


> what's creating the data/request/effect

Just a function.

> where it's sending that once it's created, and then where the "drivers" would come from and how they would get invoked

Depends entirely on how you'd do it.

You can apply the Functional Core, Imperative Shell architecture, where side-effecting, stateful code (the imperative shell) always calls the functional code (the functional core). The shell handles files, db connections, HTTP/TCP, exceptions, retries etc., and asks the functional core what to do by handing it the data that comes out of those things.
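As a rough sketch of that shape in JavaScript (everything here is invented for illustration): the core makes pure decisions about retrying, the shell does the IO and the sleeping.

```javascript
// Functional core: pure decision, trivial to unit-test.
function nextStep(attempt, maxRetries) {
  if (attempt.ok) return { action: "done", value: attempt.value };
  if (attempt.tries >= maxRetries) return { action: "give-up" };
  return { action: "retry", delayMs: 100 * 2 ** attempt.tries }; // backoff
}

// Imperative shell: does the IO, feeds results back into the core.
async function fetchWithRetry(doFetch, maxRetries = 3) {
  for (let tries = 0; ; tries++) {
    const attempt = await doFetch().then(
      value => ({ ok: true, value, tries }),
      () => ({ ok: false, tries })
    );
    const step = nextStep(attempt, maxRetries);
    if (step.action !== "retry") return step;
    await new Promise(r => setTimeout(r, step.delayMs));
  }
}
```

All the policy (when to retry, how long to wait) lives in the pure function; the shell stays a thin loop.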

For example, there's no reason an HTTP routing library has to do IoC. It can also be structured in a way where you give it a path and it returns you data, such as an event description, a set of questions, or a command description.

There are also frameworks that work this way, for example the UI state management framework re-frame. It handles the side effects for you and calls your functions that you register on certain UI events. The re-frame documentation is very good at explaining this step by step.


Something like this:

      (rf/reg-event-fx
        :stop-timer
        (fn [{:keys [db]} _]
          (let [handle (db :ticker-handle)]
            {:db (assoc db :ticker-handle nil)
             :stop-ticker [handle]})))
The side effect here is to stop a running ticker on the page, but the handler registered with `reg-event-fx` just returns a map (data) where the first key is :db (the new app state, which is like a db, and can even have a sort-of schema over it) and the second key is the actual effect :stop-ticker, which takes an argument handle (the handle of the ticker to stop).

The event handler is just describing the side effect that is to occur by returning the map.


ok so would it be correct to say this is sort of like handling an event that gets emitted?

Sorry, I know no clojurescript, so I'm having a hard time parsing it even googling the syntax


Yes, exactly.

There is a piece of code which emits the event :stop-timer, which stops a JS timer in the window.

      :on-click (fn [_](rf/dispatch-sync [:stop-timer]))
In Clojure a map has the syntax

    {:key :value}
So this event handler is returning a map which describes the event.

There is then a further function which performs the actual effect:

    (rf/reg-fx
     :stop-ticker
     (fn [[handle]]
        (when (not (nil? handle))
          (js/clearInterval handle))))
But as far as testing goes, you can test the event handler and check that it returns the expected map. The framework is responsible for actually executing the effect described by the event handler, and it does so via the name :stop-ticker. Your event handler itself can remain a pure function.


Instead of doing the side effect you return a description and let the framework handle it. E.g. you don't call `transact(db, data)` but return `["transact", db, data]`. Now your function is pure. The framework will expect you to provide the transact handler, e.g. as `register_handler("transact", (db, data) => {..})`.

Personally I'm not a fan of this abstraction (another layer of indirection). I'd rather take a `transact` function as an argument.
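Both styles side by side, as a sketch (handler and function names are made up; the "db" is just an array for illustration):

```javascript
// Style 1: effects as data plus a handler registry.
const handlers = {};
const registerHandler = (type, fn) => { handlers[type] = fn; };
const perform = ([type, ...args]) => handlers[type](...args);

registerHandler("transact", (db, data) => db.concat([data])); // toy "db"
const pureStep = (db, data) => ["transact", db, data];        // stays pure

// Style 2: inject the effect as a plain function argument instead.
const pureStep2 = (transact, db, data) => transact(db, data);

const db1 = perform(pureStep([], { id: 1 }));
const db2 = pureStep2((db, d) => db.concat([d]), [], { id: 1 });
// both: [{ id: 1 }]
```

The registry buys you serializable effect descriptions (easy to log and replay); the function argument is more direct and keeps the indirection local.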


While I love Clojure, these ideas already existed in other languages. For example, Mozart/Oz basically integrates all major paradigms [1].

It's a bit of a tragedy it has been mostly abandoned. I wish a Lisp, such as Clojure, emulated Mozart/Oz semantics.

[1] https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf


> Mozart/Oz [...] a bit of a tragedy it has been mostly abandoned

Agreed. But wow the documentation was a mess. So much language progress these last decades has been around increasing minimum expectations for ecosystem.

Poplog (integrated Common Lisp, Prolog, ML, and a C-like; 1980's) is another on my list of roads regrettably not taken. Killed by commercialization. Which also zombied CL.


Basically any Expert System Shell in Lisp in the 80s/90s was a multi-paradigm programming system (ART, KEE, KnowledgeCraft, KnowledgeWorks, Babylon, and many others). There were also a bunch of functional/relational languages like Relfun, AP5 in Lisp. There are/were also a multitude logic/relational languages in Lisp.


If you had a unix, and didn't have several hundred to thousand (adjusted) dollars, in the mid to late 1980's and early 1990's, there was... very little. Gcc and CMUCL eventually existed and later became usable. My very fuzzy recollection is Poplog was available early on and cheapish but barriered (have your academic department negotiate with ours for a site license), then commercial. Over those years, I repeatedly searched for an environment to live in, and repeatedly came up empty. And repeatedly thought: Poplog could own this space, could be the obvious no-competition language choice for non-commercial non-proprietary development... but is trading that potentially massive impact for unicorn dreams and subsistence funding. Imagine a different mid 1990's, with gcc, and then python and perl and C++, struggling TCL-like to gain traction against a widespread active accessible portable powerful poplog/CL/ML tooling and community.


Waldek Hebisch, who is the maintainer for the Fricas computer algebra software, also maintains a 64-bit version of Poplog for amd64/x86_64:

https://github.com/hebisch/poplog https://www.cs.bham.ac.uk/research/projects/poplog/freepoplo...


Holy crap, that figure 2, showing all the programming paradigms and how they relate to each other is incredible.


The extended version of this is the book Concepts, Techniques, and Models of Computer Programming [1], which is just incredibly good.

There's also a summary poster [2].

Actually, CTM was in Rich Hickey's reading list when he designed Clojure [3].

[1] https://www.info.ucl.ac.be/~pvr/book.html

[2] https://www.info.ucl.ac.be/~pvr/paradigmsDIAGRAMeng201.pdf

[3] https://www.goodreads.com/list/show/137472.Rich_Hickey_s_Clo...


Thank you for the recommendation!


Wow that paper has a cool footnote about the “empty paradigm”:

> Of course, many of these paradigms are useless in practice, such as the empty paradigm (no concepts)[1] or paradigms with only one concept.

> [1]: Similar reasoning explains why Baskin-Robbins has exactly 31 flavors of ice cream. We postulate that they have only 5 flavors, which gives 2^5 − 1 = 31 combinations with at least one flavor. The 32nd combination is the empty flavor. The taste of the empty flavor is an open research question.


I don’t remember whether it was Rod or Todd, but “unflavored [iced milk] for me!” was in fact a Simpsons gag.


Link looks great; thank you.


A big part of the curse of lisp is that anything feels possible. Even for one guy who works at a veg shop.


There’s nothing wrong with it. What’s wrong is not pursuing it once you get that feeling. The worst outcome is a much better understanding.


Worse outcome is wasting time tying your future to a language that will die soon, with few people you could hire to help.

But the rewards are in line with the risks.


Geez, it's a programming language not a spouse! You can just, you know, use another one if it doesn't work out.


Yep! I’ve been years parted from Clojure(Script) and I’m still glad we had those times we shared together.


I don't see any macro usage in this project. What part of Clojure made you feel anything is possible here?


This looks really interesting! As someone who has worked with relational databases and Clojure in the past, I can definitely see the appeal of a functional relational programming model.

I like that relic provides support for declarative data processing and declarative relational constraints. These are areas that can be tricky to handle when working with traditional relational databases, so it's great to see a library that addresses these pain points.

The ability to use relic with reactive programming is also a big plus. I'm curious to see how this would work in practice, particularly in the context of an interactive application.

Overall, relic seems like a promising library for anyone looking to work with normalized data in Clojure. I'll definitely be giving it a try on my next project!


Dan recently recorded a session about Relic, if you'd like to hear him speak more about the origins of the library, some design choices, and some examples:

https://youtube.com/watch?v=QsEJ5O2e4Es


I like how the typical JS-library-style README is put separately in a 'Pitch' section.


I guess sqlite + honeysql would be an alternative. Curious to know why the author prefers the relations/tables/Codd model over graph-map databases like Datomic, and whether he thinks something like Datomic falls outside of "Out of the Tar Pit".


Mainly it is because the tar pit paper proposed relational algebra for its functional relational programming.

I do think for some it might be easier to reason mechanically about dataflow through collection-oriented operators. But I suppose it's subjective.

clj-3df [1] to my understanding does something like Relic for datomic datalog using differential dataflow.

Disclaimer: I now work on XTDB [2], so Datalog is now somewhat my day job.

[1] https://github.com/sixthnormal/clj-3df

[2] https://xtdb.com/


Interesting. What are your thoughts regarding Hickey's comments about the structural rigidity of relations/tables for representing information: how they impede flexibility, make your programs hard to change over time, and increase complexity?

He mentions it in various videos but these snippets are two quick finds:

- https://youtu.be/Pz_NvY1kw6I?t=489

- https://youtu.be/thpzXjmYyGk?t=255


I'm not the OP, but thinking well beyond the original topic of in-memory reactive programming (where my answer would be quite different) to the world of long-lived durable databases... one perspective to consider is that any system built around N-ary relations can automatically benefit from the full range of relational algebra for transforming and composing both base data and derived relations. The flexibility of N-ary relations is largely what has kept SQL databases relevant despite the flaws of SQL itself.

In contrast, a system that only handles base data in terms of triples assumes that you have a perfect attribute-oriented information model figured out upfront. But given this is rarely the case users will want tools that help them to easily transform/migrate their data and schema over time. Ideally this takes the form of a declarative language that minimises the amount of code that needs to be written. However, without a compelling end-to-end transformation language figured out I think any alternative database systems with their alternative information models (triple-based or otherwise) will struggle to compare favourably with mainstream databases, where declarative data munging with SQL is considered valuable and routine.

Triples may well prove to be the best way to handle information in software over the long-term, but I'm not sure that the systems which currently work with triples are good enough or widespread enough to test that theory.


> In contrast, a system that only handles base data in terms of triples assumes that you have a perfect attribute-oriented information model figured out upfront. But given this is rarely the case users will want tools that help them to easily transform/migrate their data and schema over time.

Curious, where do you think Datomic and Datalog fall short (assuming one is using Clojure and Datomic's performance is tolerable)? By "perfect attribute-oriented model" I assume you are referring to the database's information model and its support for that model, not to domain modeling with triples. Triples and relations are equally flexible, but which model requires more effort to maintain and transform alongside your application?

I think Hickey's point is that structurally rigid relations, and stuff like the arbitrary join tables needed to create many-to-many relations, get hard-coded throughout your application, making applications very hard to change over time and forcing you to put extreme effort into providing a set of logical views to isolate the application from the physical structural decisions. He says the more structural components (tables/intersection tables, and having to name them (places)) you have in your model, the more rigidity you get in your applications. But, as you mentioned, triples haven't proven themselves.

I'm not experienced enough to understand all the tradeoffs here and can't tell if Hickey is wearing a salesman hat ;)

Of course, databases like Postgres would be my first choice for most projects (even if using Clojure, which I like but is still a hard sell for webapps). As you said, they have too much going for them (flexibility of relational algebra, SQL, they are well understood, widespread, with great and well-supported implementations, etc.).


To be clear I'm not experienced enough to judge all these tradeoffs properly either, and I've never worked with Datomic. I suppose my main point is that an ideal database UX would avoid having to write processing/transformation code that needs to run outside the database (Clojure is pretty great, but still). This vision probably runs counter to the 'deconstructed' design of Datomic which heavily emphasises the usage of Clojure around it, but that could possibly be reconciled. It may well even barely register as a significant gap for current users in practice - I have no real idea :)

> provide a set of logical views to isolate the application from the physical structural decisions

To me this feels like the holy grail for what people really want from databases (beyond the basics of transactions/durability), and triples definitely still hold promise as a helpful abstraction. However I suspect Incremental View Maintenance (as Relic discusses) has a potentially even bigger role to play here.
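To make the IVM idea concrete, a toy sketch in JavaScript (names invented; real engines, e.g. ones built on differential dataflow, are far more sophisticated): the derived view is updated from row deltas rather than recomputed from the base table.

```javascript
// Toy incremental view maintenance: a count-per-key "view" kept up to date
// by applying row deltas, instead of rescanning the base table each time.
function makeCountView(keyFn) {
  const counts = new Map();
  return {
    apply(delta) {                 // delta: { added: [...], removed: [...] }
      for (const row of delta.added ?? []) {
        const k = keyFn(row);
        counts.set(k, (counts.get(k) ?? 0) + 1);
      }
      for (const row of delta.removed ?? []) {
        const k = keyFn(row);
        counts.set(k, counts.get(k) - 1);
      }
      return counts;
    },
  };
}

const ordersByCustomer = makeCountView(o => o.customer);
ordersByCustomer.apply({ added: [{ customer: "bob" }, { customer: "alice" }] });
ordersByCustomer.apply({ added: [{ customer: "bob" }] });
// view now: bob -> 2, alice -> 1, maintained without a full rescan
```

The payoff is that a logical view stays cheap to keep current, which is what makes "views as the application's interface to the data" practical.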


I think this is a deep battle-of-the-approaches topic - whether it's better to "rewrite history" or deal with the different shape of past data. But in the history preserving case the data model doesn't need to be perfect up front, you can approach the data in your application in a way that takes into account the evolution. Eg in Datomic there are practices that support this [1] [2].

Of course you can also have dev-time scratch environments to play with stuff without having to support it.

[1] https://docs.datomic.com/on-prem/best-practices.html?search=...

[2] https://docs.datomic.com/on-prem/best-practices.html?search=...


Beautiful. I've been noodling on exactly this same idea. So far I've explored using SQLite but it would be ideal if there was no SQL between me and my relations. Another direction I've taken is adding STM to JavaScript and implementing a JS api for querying relations.


How are updates triggered, what's the mechanism? From the examples it seems like you don't need to subscribe and provide a function to be called when changes happen, but then how does it work?


A relic db is a persistent data structure [1]. Applying a transaction with rel/transact gives you a new database; rel/track-transact also returns the changes to the relations you have opted in to change tracking for (using rel/watch).

[1] https://en.wikipedia.org/wiki/Persistent_data_structure
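In plain JavaScript terms, the observable contract looks like this naive sketch (illustrative only; a real persistent structure shares most of its internals rather than copying):

```javascript
// A persistent update returns a new value; the old one is untouched.
// Hypothetical shape: the "db" is a map of table name -> array of rows.
function transact(db, table, row) {
  return { ...db, [table]: [...(db[table] ?? []), row] };
}

const db0 = {};
const db1 = transact(db0, "customer", { id: 42, name: "bob" });
// db0 is still {} — anyone holding the old value sees no change,
// which is what makes change tracking (comparing db0 and db1) straightforward.
```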


I dont like clojure


Funnyduck99 is probably mocking all the people that go into HN posts about Clojure to complain about how their team of non-Clojure programmers went to work on a Clojure project and it was a bad experience.


No, I genuinely don't like Clojure because I just finished my first Clojure class. It was also my first functional language, and it was online, and I am bad at learning online, so that's why.


Sounds like any functional language would have disappointed in that case?


Yup, that's what I said.


I hope you get an opportunity to explore it (FP, whether Clojure or otherwise) in a more conducive environment. It sounds like this wasn’t the best learning environment for you, and that’s totally valid, but there’s a lot of good stuff to learn if you’re in an environment that suits you.


LPT: While learning Clojure, the following (almost always true) mental model helped me massively at "getting" Clojure.

"It's Maps All the Way Down"

Spend a lot of time just learning how to create/read/update/delete (CRUD) map contents.

There will be enough time to tackle the other cases/tech (atoms, protocols etc.), but until you get good at maps don't get bogged down by the other cool stuff.


"It's Maps All the Day Down"

In reality - its actually Trees all the way down. But, because you don't have proper structures in Clojure, one uses Maps when one should actually be using Trees.


Lol talk about missing "the forest for the trees".


That’s a much more interesting comment than the top level one. People generally care about learning experiences and explicitly subjective statements.


What makes people see Lisp-like languages and think to themselves "yep, this is how I want my code to look" is beyond me


We (professional developers using lisp-like languages daily) don't have the reaction of "ewww" as soon as we see something foreign, but instead we think "hmm, that looks different, I wonder why it looks different?" and then we start to try to understand it. Then you give a lisp-like language a try (Clojure in my case) and suddenly it's really hard to program in any other language, because they're not as good as a lisp language.

If you have a knee-jerk reaction to everything that looks different, you'll lose out on lots of fun stuff. You should give lisp a try, I'm sure you won't regret it.


Lisp only looks "different" if you've gotten used to C, Java, Perl or whatever as your "baseline normal".

Unless you carefully study the reactions of nonprogrammers, you are biased.

To a nonprogramming person, this is quite probably "different"

  (p->arr[x].pfn(y)>>SHIFT)&MASK


^^^ That is like someone went back in time and gave the Egyptians a typewriter but only installed the Wingdings font :)

The magic of S-Exprs is like drug addiction... someday you wake up and realise... damn, I can't go without S-Exprs in my daily-driver languages :/


>If you have a knee-jerk reaction to everything that looks different, you'll lose out on lots of fun stuff. You should give lisp a try, I'm sure you won't regret it.

Yes, I always tell people: sure, you don't need to like LISP, but as a serious CS professional, AT LEAST give it a real HONEST go! Then if you still don't like it, cool, move on!

There is an active and lively and long-lived LISP-Like community even today, there is a high probability there is something there.

YMMV :)


But is there any need to make things different for the sake of difference?

For example, why not name "car" function as "first", "head" or "beginning"? Why not name "cdr" as "rest", "tail", "end" or "back"?


Grandparent programs in Clojure, which doesn't use car and cdr.

ANSI Common Lisp has first and rest as synonyms of car and cdr. Books from the middle 1980s on the then emerging Common Lisp already cover this.

It's like you're griping about an issue that was closed before CVS existed, let alone git.

car and cdr still exist because of backward compatibility and because they are good names for when cons cell structure is just arbitrary structure and not a list. The car is not always the first item of a list, and cdr is not always the continuation of a list. These words are deeply entrenched in the Lisp culture. The words have no confusing associations with anything else but the parts of a cons cell. I've not come across any "car" or "cdr" usage in computing anywhere, except Lisp-inspired jokes like in the "Locator/Identifier Separation Protocol". In the telecomm industry, CDR stands for "call detail record", but that's removed from programming language and data structuring.

Knuth defined some binary-cell tree structures in TAOCP. He used the words alink and blink (as in A link, B link). Those words are not bad.

It's good if the parts of a flexible data structure that programs use for representing all kinds of things have names that don't have any confusing associations. When you see those names, you know they are just literally about that structure, and you can see from how those words are used what the shape of the structure is.


LISP was one of the very first high-level programming languages (it came around the same time as FORTRAN).

If anything, other languages are different for the sake of being different.

Btw, Clojure makes those exact changes you mention (first instead of car etc.)


CAR and CDR have origins in a hardware feature of the IBM 704. But they came into Lisp via Fortran. The list processing in Lisp was closely inspired by FLPL, the Fortran List Processing Language: a system for linked-list manipulation done in Fortran. FLPL had functions like XCARF and XCDRF and others. McCarthy greatly improved the naming by dropping the gratuitous X...F noise. (What had they been thinking?)


> Clojure makes those exact changes you mention

roughly 25 years after Common Lisp introduced first,..,tenth, and rest


Clojure uses “first” and “rest”, as does Racket.


Lispey stuff self-selects its fans quite nicely in this way.


What makes people see non-Lisp-like languages and think to themselves "yep, this is how I want my code to look" is beyond me


I like the structure that S-expressions convey. Must be something with how my brain works. It's probably also why I find Python code utterly unreadable.


To each their own. There are obviously very smart people choosing Lisp so I’m not trying to knock it down or anything, I just never got the appeal

Every now and then I try to check out a Lisp project and this is my reaction the moment I see the code: https://giphy.com/gifs/seinfeld-bye-jerry-106PwpLIIXJnXi


This is what we see: https://raw.githubusercontent.com/tarsius/paren-face/master/...

Don't look at the parens and don't stop at the superficial look :]


In most cases, an imperative example in a curly-bracket language (JavaScript as one example) has the same number of parens and curly brackets as a Lisp has in parens. They're just in different places.

And, the non-Lisp languages tend to have more rat droppings => , ; . etc.

f is a function which takes two arguments, adds them, and returns the result.

Clojure: (def my-var ["hello" 123 (f 10 20)]) -> my-var: ["hello" 123 30]

([""()]) -> 8 noisy characters

JavaScript: const my_var = ["hello", 123, f(10, 20)]; -> my_var: ["hello", 123, 30]

=["",,(,)]; -> 11 noisy characters

Obviously this difference becomes more pronounced with longer argument lists or array element counts.


For me much of the appeal of the syntax comes from using it with a editor integrated REPL, or when writing macros, or when generating linter configuration from a domain model etc.

However, I agree with you on some level. Lisp code can easily be written in a way that's hard to read, specifically by nesting expressions way more often than is comfortable for me to read. I much prefer let bindings and threading, so most of the code reads from left to right.


>To each their own. There are obviously very smart people choosing Lisp so I’m not trying to knock it down or anything, I just never got the appeal

Just know that almost no one STARTS off by liking Lisp-Syntax, but oh boy does it grow on you.


For me, S-expressions remove a whole layer of stuff to think about and remember. It normalizes everything.

And you get the bonus of being able to construct or parse code from within code, since the structure is so uniform and flat.


I totally agree with this, but my love of Lisps wasn't instantaneous. When it clicked though — which required some effort — and I could appreciate the elegance of the language, I didn't want to use anything else.

Regarding Python, I've always disliked its functional whitespace and OO inclination. Given its popularity, though, I'm clearly a minority!

TL;DR — to each their own.


What I noticed is that in the beginning most people (including me) think like this. It's ()-SOUP!

Then after a month or so you suddenly realise: wow, this is actually nice and useful, and now all other non-S-Expr syntax feels yucky.

You get to that point NOT because you are now simply enduring/accepting that it looks ugly, but because after a while it really becomes pretty!

It's like picking up smoking: no one likes the first cigarette or two, it tastes like sht, yet if you keep doing it, it becomes lovely.

PS. Don't smoke, kids!


"Syntactic sugar causes cancer of the semicolon."

Alan Perlis


To each their own! I feel the same way about Algol family languages, too many arbitrary design decisions that have been internalised for decades at this point for my liking.


There are many incredible foods that we cannot imagine putting in our mouth as children, but usually we grow up and discover how wonderful they are.



