I used to use redux-orm, which offered a similar promise. This library seems more mature and thought out, kudos!
My app is still alive and kicking and I ended up implementing many of the ideas here myself, in a less reusable fashion.
A primary motivator was that as the use cases evolve, a storage model optimized for application specific access patterns becomes compelling.
The convenience of having properly implemented relational semantics and the performance benefits of indices is huge, but the risk of hitting assumptions that don't fit your use case is also non-trivial.
Thanks for the feedback! I'm going to push back on your assertion that it's mature since it's about two days old and I only revved it to v1.0.3 because of some .d.ts resolution issues :)
I agree with your final point. The same might be true of the undo stack implementation too: at some point the usage patterns might invalidate some of my performance assumptions.
I'm pleased with the performance but I stopped short of quoting numbers, because hardware. If you compile and `npm run testPerf` on your own machine you will see a bunch of benchmarks that clock most operations in low µs.
When I first started thinking about this project, classes still meant a bunch of transpilation overhead (which dates the origin story!). I persuaded myself that plain old objects with TypeScript interfaces would strike a better balance between performance, size, and developer experience.
Combined with any kind of compression, the negligible difference is eliminated. Though it's true that static functions might be more optimisable in some cases, I'd say that on balance the effect on maintainability is too significant for the returns. I'll take DI and testability over a few kB of difference.
Laravel has a similar convention, wrapping everything in static classes. I hated it at first, and still hate the convention outside Laravel, but in context it works well. Not for minification, but for reducing cognitive load during maintenance while preserving good separation of concerns.
I don't think there are dozens, but there are a fair few. Most of the stuff in the common directory seems to be standard patterns they use, so it's doing more than just wrapping a function.
When I built this I used immer under the hood, but actually I don't think it's required; I'd have to take another look.

I'm not really a front-end dev, but I felt the whole "store" concept in Svelte was incomplete, and I could never understand the React/Redux model. Years ago I had implemented something in .NET to simulate functional lenses, and when I looked into immer I saw the JS Proxy approach and realised I could easily build fully reactive lenses for Svelte.

In the end it was mostly an academic exercise, as it is used in one internal office website and my day job is writing algorithms for CNC machines.
Web apps are so over-complicated because of piecemeal client-side data stores and the need to sync with whatever API protocol is currently in fashion.
Imagine how simple apps would be if your data model and query interface were exactly the same on the server and client, with automatic caching and fetching logic.
There is such an incredibly complex chain of code to do the simplest of things.
What we lack is a good client-side relational+graph database that has an identical server-side implementation.
The requirements on clients are fundamentally different than on the server. On clients, you want at least:
1. Batching, to avoid network waterfalls (a consequence of network use being more expensive on clients than on the server).
2. Data dependencies, so the framework knows how to stitch queries together when batching them.
3. Consistency, so all UIs update when an update comes in (a consequence of SPAs being long-lived, as opposed to the typical stateless server request).
These all take some code/framework overhead. It’s valuable to pay that cost on clients, but is unnecessarily verbose for the server.
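The batching requirement in point 1 is often implemented as a microtask-level loader: individual record fetches made in the same tick are coalesced into a single round trip. A minimal sketch, where `fetchMany` is an assumed batch endpoint, not anything from a real library:

```typescript
// Hypothetical batching loader: load(id) calls made in the same tick are
// queued, then flushed together on the microtask queue as one request.
type Fetcher = (ids: string[]) => Promise<Record<string, unknown>>;

function createBatchLoader(fetchMany: Fetcher) {
  let queue: { id: string; resolve: (v: unknown) => void }[] = [];
  let scheduled = false;

  return function load(id: string): Promise<unknown> {
    return new Promise((resolve) => {
      queue.push({ id, resolve });
      if (!scheduled) {
        scheduled = true;
        // Flush on the microtask queue so all synchronous callers get batched.
        queueMicrotask(async () => {
          const pending = queue;
          queue = [];
          scheduled = false;
          const results = await fetchMany(pending.map((p) => p.id));
          pending.forEach((p) => p.resolve(results[p.id]));
        });
      }
    });
  };
}
```

Three components each calling `load(...)` during the same render would produce one network request instead of a waterfall of three.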
People have been trying to unify local and remote function calls for years, and it’s a similar problem: the two are fundamentally different.
"code/framework overhead" is fine as long as it's hidden in a library. Too many people re-implement a data layer client-side that is always a poor mapping to the server data model. My frontend should act as if it's running off a local database and syncing with the server. It could batch together all requests made to this database that it cannot fulfil or that are stale.
SQLite on WASM[0] is absolutely what you are looking for. There is also “Absurd SQL”[1] which extends it to use indexedDB as a VFS for storage allowing proper atomic transactions and not loading the whole thing into memory.
Combine it with the various JavaScript ORMs and you have a nice developer UX.
I’m waiting for someone with more time than myself to build a syncing feature on top of SQLite’s Sessions[2] so that local changes are synced back to the server.
(I feel like a cheerleader, posting this comment every week.)
Absurd is really cool. I've always thought it would be great to have an SQL db implemented in native JS though.
With the database running in the same process as your app, you could store native JS Objects of your table rows in the cache, which you could bind events to. So if you modify a row in a table in the db, events could be instantly propagated to the UI. Haven't thought it through fully yet.
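A minimal sketch of that idea, assuming a made-up in-process table: because the "database" and the app share a process, cached rows stay plain JS objects and change listeners can fire synchronously on mutation, with no serialization boundary between the DB and the UI:

```typescript
// Illustrative in-process table: rows are native JS objects, and any
// mutation through set() synchronously notifies bound listeners,
// so UI updates can propagate instantly.
type Row = Record<string, unknown>;
type Listener = (rowId: string, row: Row) => void;

class Table {
  private rows = new Map<string, Row>();
  private listeners = new Set<Listener>();

  set(rowId: string, row: Row) {
    this.rows.set(rowId, row);
    // Propagate the change to any bound UI immediately.
    this.listeners.forEach((fn) => fn(rowId, row));
  }

  get(rowId: string) {
    return this.rows.get(rowId);
  }

  onChange(fn: Listener) {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn); // unsubscribe
  }
}
```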
SQL is also a bit cumbersome for things like deeply nested data, hierarchies, trees, graphs which a lot of client-side application state ends up being.
Would this suffer from the overhead of using WASM, both to make the calls and to transfer the data back to the UI thread?
I'm not sure but I expect it would depend on your use cases. High frequency i/o of small data might be slower compared to low frequency i/o of very large data.
Mind you I'm working off of a cursory understanding of WASM performance issues from 2 years ago. Maybe this has changed a lot, or my understanding was incorrect in the first place. Do you know much about this?
There's Datascript[0] and Datomic[1]. While not "identical", they are definitely complementary. There's a (now defunct) library[2] for keeping them in sync too.
PouchDB has offered something like that for close to a decade now, if I'm not mistaken. Might be worth looking into. It is document-based rather than relational+graph, though I suppose you could build graph utilities on top of it.
PouchDB is incredible, so much respect for the devs that built it.
The trouble, though, is that it's now quite an ageing codebase with relatively little maintenance. The original developers have moved on and it's a little neglected. A few community members have picked up the mantle in the last 6 months, but I would be careful picking it for something new.
Last time I looked, the bug tracker auto-closed tickets after a month of no activity. That makes it look like there are few bugs being tracked. The problem is there are loads; they're just all closed.
RxDB does this. Recently the PouchDB integration has been abstracted so it can be swapped out, but it's still the best client-side database. RxDB syncs with GraphQL and Pouch/CouchDB.
There's WatermelonDB which uses IndexedDB on web and SQLite on native, it's nice for syncing to custom backends.
There's GUN and Orbit for distributed graph databases.
On topic: TinyBase looks really, really nice; it fills the gap between hefty client-side databases and state-management solutions like Redux + Persist. I'd like to see Redux middleware integration for time-travel debugging, event logs, and snapshotting if possible. The analytics and rollback APIs are a nice touch. The size is enticing.
Orbit.js does some of these things and coordinates client with a variety of data sources through a standard set of interfaces and using normalized data structures.
I believe Orbit.js was inspired by Ember Data, as I know Dan Gebhardt is involved with Ember.js and https://jsonapi.org
The entire point of an API is decoupling. A decoupled API makes double sense for a SPA. It's the client-server architecture on a silver platter.
This idea got muddled when people started making websites as SPAs, which necessitated SSR for SEO and first-page load. Now you blurred the lines between client and server, otherwise known as isomorphism, which is a good sounding word for a bad idea.
You can see this in action in the TinyDraw demo: https://tinybase.org/demos/tinydraw - make some changes, refresh the page, even open it up in two windows of the same browser at the same time.
I've been using React with TypeScript since 2019 and I haven't encountered any problems using objects to manage data. Recoil is type-safe and very efficient.
The website does a great job explaining how to use the API, but it would be nice to see some functional examples as well. It's difficult for me to imagine in which use cases TinyBase shines.
I really wonder, as a Redux user, how to deal with a state that is a deeply connected graph.
You cannot exactly rely upon the `...` spread syntax to simplify the definition of the new state.
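One common answer is normalization: store graph nodes flat by id and keep relations as id references, so an update touches a fixed number of levels no matter how deep the graph is. An illustrative sketch (the node shape here is invented for the example):

```typescript
// Normalized graph state: nodes stored flat by id, edges as id references.
// Updating one node spreads only two levels, regardless of graph depth.
interface NormalizedState {
  nodes: Record<string, { id: string; label: string; childIds: string[] }>;
}

function renameNode(state: NormalizedState, id: string, label: string): NormalizedState {
  return {
    ...state,
    nodes: {
      ...state.nodes,
      // Only the changed node is replaced; all other nodes keep their identity,
      // so memoized selectors over them don't recompute.
      [id]: { ...state.nodes[id], label },
    },
  };
}
```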
I really like the concept. We use Supabase as the backend for a Vue application.
We end up pulling in very large amounts of data to the frontend (details on every alert and event in kubernetes clusters) and then filter it multiple ways and graph it on a timeline.
So we end up implementing lots of db-like logic in TypeScript. I've long wished for a simpler and more reusable way to do this that still lets me process the data client-side.
Here's an example of a simple timeline graph (built with Vega-Lite), which needs the ability to filter by several columns and do various groupings.
I think most dbs, paas, frameworks skip this use case.
For a snappy data-centric app, you pretty much need your entire app logic to run client-side.
Everyone always ends up with this terrible denormalized client-side cache with duplicated server-side logic.
What you really want is the exact same db running server-side and client-side.
I thought for a while this would be an SQL db, but now I think SQL and the relational model are the wrong approach for web app data, as they are inefficient for watching queries and syncing.
I'm using supabase which essentially autogenerates the entire server-side part. So all my logic is client side. What's missing is tighter integration between swrv/vue-swrv and the supabase client so that when you run a query it can be smart enough to know if you apply it locally or remotely.
Yes! Though there is an optional ui-react module, this is not a React library per se. There's nothing to stop a Vue equivalent except... my lack of current knowledge about it. Noted!
Hey, I wrote that docs page back in 2016 :) I'd been using an early version of Redux-ORM on a project, there were some Redux repo discussion threads about this problem space, and it seemed like something that was worth documenting.
We did eventually add a `createEntityAdapter` API to Redux Toolkit [0] [1], which handles the process of storing items in normalized form and provides "CRUD"-style reducers for typical operations on that data, but it doesn't provide any support for managing relations specifically.
That topic _has_ come up a few times recently, so I've been contemplating that as a thing we might consider trying in a future version of RTK.
If folks have thoughts on what that API's requirements might look like, I'd be happy to start discussing that over in the RTK repo!
Nice. I wrote a similar thing a couple years ago, even "tinier" :)
https://github.com/dmaevsky/tinyx
Immutable state, undo/redo, immer-like patch recording etc., in less than 200 lines of code.
This is beautiful and I just installed it for testing in my new app, but I have a sneaking suspicion tiny databases are the sort of thing we’re all better off making from scratch so we can tinker with implementation and learn (even if that’s more work and less “optimal”)
I think for now you'll need to create your own local type definition that models the data you know is in the Store. But let me think a bit more about this.
Now this is something great. Many projects use full fledged frameworks just for using the state management aspects of it. The store module is 2.6 kb which is fantastic!
This allows you to wrap a batch of mutations together and then the fully reconciled change events are all fired together at the end. Apparently I could do a better job of marketing this secret feature :)
Ah no, these aren't transactions in the sense of locking and multiple client access. My glib answer is that there's perk to JavaScript not being multithreaded! But seriously, when persisting/saving a Store, the library doesn't guarantee that you aren't saving over something another browser window might have just written. I would need to think about what that would entail.
"Developers" (I use the term loosely) coming from e.g. React are getting used to storing data in a global state. The Redux library lets one change the value in the global state, and all places that use the value are updated "automatically" (e.g. by tons of behind-the-scenes code that the "developer" on his brand new mac doesn't notice, but users on five year old Windows machines with a dozen open tabs do notice). Vue has a similar implementation, Vuex.
When not using React or Vue, people used to these conventions will not know how to properly pass state between objects. This library is for them. It really offers little more than `window.pets.dog.species = "poodle"` would offer but because it is a library it feels "less hacky".
Global variables and random stuff self-initializing and writing to those global variables make me cry on the inside every time I see it. You basically create a hard-to-unit-test mess by doing that. Because of this, a lot of frontend developers never get into the habit of unit testing: they don't even realize that the difficulty is self-inflicted, caused by global variables combined with a failure to separate the business of creating stuff (plumbing/glue code) from using that stuff, i.e. the business logic that should be tested.
So, you create a thing that creates more things that live in global variables that is impossible to untangle from all the other bits of code that create yet more global variables and end up depending on each other's global variables having particular names that are hard-coded left, right, and center. Basically, any attempt to test anything ends up firing most or all of the code at which point it's not a unit test but an integration test. If you need an entire browser running to test a thing, it's because it breaks this rule of separating glue code from business logic. Don't do that.
If you fix this, you can create stuff in a slightly different way in your tests (e.g. use stubs or mocks) and test them in isolation. And the same properties actually also make it easier to reuse things.
I've been using kotlin-js for the last year with a reactive web framework called fritz2 that is loosely inspired by React. One of the first things I did there was bring in koin, which is a dependency injection framework that many Android developers would know (similar to Dagger, which is a popular alternative). At startup time, koin ensures anything that has dependencies gets their dependencies and that those dependencies get created. Koin doesn't really do anything fancy or expensive. It just forces you to separate your plumbing and glue code from your business logic and makes that easier than doing it manually would be (which isn't all that hard, actually).
Implementing something like koin in javascript would not be hard and there are probably multiple npms that do that. And you can do this manually (it's called DIY dependency injection). But you need to know to do this and be a bit disciplined about using it. You can do this in almost any language actually and it's almost never a bad idea and almost always a mistake not to separate your glue and business logic. Most bigger projects get organized on this front at some point; or they just fail. That happens a lot with Javascript code written by developers that don't understand how to properly structure their code.
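The DIY dependency injection mentioned above can be as simple as passing dependencies in as parameters instead of reaching for globals, so tests can supply a stub. All names here are illustrative:

```typescript
// Business logic depends on an interface, not on a global like window.pets.
interface PetStore {
  getSpecies(name: string): string;
}

function describePet(store: PetStore, name: string): string {
  return `${name} is a ${store.getSpecies(name)}`;
}

// Glue/plumbing code lives in one place at the edge of the app; this is the
// only spot that knows which concrete implementation is wired in.
const realStore: PetStore = {
  getSpecies: (name) => (name === 'fido' ? 'poodle' : 'unknown'),
};
```

A test can then exercise `describePet` in isolation with a stub `PetStore`, with no globals and no browser required.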
Just for the record, I developed TinyBase on a 12" 2016 1.1GHz MacBook! Constraints are good :) - and it keeps me honest with respect to performance, size, and toolchain.
It’s good for not introducing convoluted ideas that other state management libraries have been guilty of. You store some shit, and retrieve some shit. Sounds about right. The extra stuff is nice, but it doesn’t seem like the library is making you use schemas, and so on (especially when all most people want to do is store some values).
Now, if we can get some autocomplete going with those schemas, that would be great for when the global store inevitably becomes a giant blob.