Show HN: InstantDB – A Modern Firebase (github.com/instantdb)
1145 points by nezaj 7 months ago | 297 comments
Hey there HN! We’re Joe and Stopa, and today we’re open sourcing InstantDB, a client-side database that makes it easy to build real-time and collaborative apps like Notion and Figma.

Building modern apps these days involves a lot of schleps. For a basic CRUD app you need to spin up servers, wire up endpoints, integrate auth, add permissions, and then marshal data from the backend to the frontend and back again. If you want to deliver a buttery smooth user experience, you’ll need to add optimistic updates and rollbacks. We do these steps over and over for every feature we build, which can make it difficult to build delightful software. Could it be better?

We were senior and staff engineers at Facebook and Airbnb and had been thinking about this problem for years. In 2021, Stopa wrote an essay talking about how these schleps are actually database problems in disguise [1]. In 2022, Stopa wrote another essay sketching out a solution with a Firebase-like database with support for relations [2]. In the last two years we got the backing of James Tamplin (CEO of Firebase), grew to a team of 5 engineers, pushed ~2k commits, and today we're going open source.

Making a chat app in Instant is as simple as

    function Chat() {
      // 1. Read
      const { isLoading, error, data } = useQuery({
        messages: {},
      });
    
      // 2. Write
      const addMessage = (message) => {
        transact(tx.messages[id()].update(message));
      }
    
      // 3. Render!
      return <UI data={data} onAdd={addMessage} />
    }
Instant gives you a database you can subscribe to directly in the browser. You write relational queries in the shape of the data you want and we handle all the data fetching, permission checking, and offline caching. When you write transactions, optimistic updates and rollbacks are handled for you as well.
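
For example, a nested query with a filter looks like this (a sketch; `sender` and `roomId` are stand-ins for whatever your schema defines):

    // Fetch one room's messages, along with each message's sender
    const { isLoading, error, data } = useQuery({
      messages: {
        sender: {},                           // follow the relation
        $: { where: { roomId: "room-1" } },   // filter in the shape of the data
      },
    });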

Under the hood we save data to Postgres as triples and wrote a datalog engine for fetching data [3]. We don’t expect you to write datalog queries, so we built a GraphQL-like query language that doesn’t require any build step.
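
To give a rough intuition (the attribute names here are illustrative, not the exact storage format):

    // An entity like { id: "m1", text: "hi", authorId: "u1" }
    // is stored as [entity, attribute, value] triples, roughly:
    //   ["m1", "text",   "hi"]
    //   ["m1", "author", "u1"]
    // Queries compile down to datalog over these triples.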

Taking inspiration from Asana’s WorldStore and Figma’s LiveGraph, we tail Postgres’ WAL to detect novelty and use last-write-wins semantics to handle conflicts [4][5]. We also handle websocket connections and persist data to IndexedDB on web and AsyncStorage on React Native, giving you multiplayer and offline mode for free.
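
Roughly, the conflict rule works like this (an illustrative sketch, not the actual implementation):

    // Each write carries a timestamp; for a given (entity, attribute) pair,
    // the newer write wins.
    type Triple = { e: string; a: string; v: unknown; t: number };

    function merge(current: Triple | undefined, incoming: Triple): Triple {
      return !current || incoming.t >= current.t ? incoming : current;
    }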

This is the kind of infrastructure Linear uses to power their sync and build better features faster [6]. Instant gives you this infrastructure so you can focus on what’s important: building a great UX for your users, and doing it quickly. We have auth, permissions, and a dashboard with a suite of tools for you to explore and manage your data. We also support ephemeral capabilities like presence (e.g. sharing cursors) and broadcast (e.g. live reactions) [7][8].

We have a free hosted solution where we don’t pause projects, we don’t limit the number of active applications, and we have no restrictions for commercial use. We can do this because our architecture doesn’t require spinning up a separate server for each app. When you’re ready to grow, we have paid plans that scale with you. And of course you can self-host both the backend and the dashboard tools on your own.

Give us a spin today at https://instantdb.com/tutorial and see our code at https://github.com/instantdb/instant

We love feedback :)

[1] https://www.instantdb.com/essays/db_browser

[2] https://www.instantdb.com/essays/next_firebase

[3] https://www.instantdb.com/essays/datalogjs

[4] https://asana.com/inside-asana/worldstore-distributed-cachin...

[5] https://www.figma.com/blog/how-figmas-multiplayer-technology...

[6] https://www.youtube.com/live/WxK11RsLqp4?t=2175s

[7] https://www.joewords.com/posts/cursors

[8] https://www.instantdb.com/examples?#5-reactions




[Firebase founder] The thing I'm excited about w/Instant is the quad-fecta of offline + real-time + relational queries + open source. The amount of requests we had for relational queries was off-the-charts (and is a hard engineering problem), and, while the Firebase clients are OSS, I failed to open source a reference backend (a longer story).

Good luck, Joe, Stopa and team!


I always assumed that an architectural decision had prevented relational queries in Firebase.

It was jarring to find out that indexes are required for every combination of filters your app applies, but then you quickly realize that Firebase solves a particular problem and you're attempting to shoehorn it into a problem space better solved by something like Supabase.

It's not too dissimilar to DynamoDB vs RDB.


> I always assumed that an architectural decision had prevented relational queries in Firebase.

Seems the biggest problem is that Firebase doesn't have relations. How can you query that which does not exist?

I'm guessing what they really want is SQL? Once upon a time when I was stuck on a Firebase project I built a SQL (subset) engine for Firebase to gain that myself, so I expect that's it.


Building a logistics app, I wish I could query in Firebase for items that don’t have a “shipped” field.

But I can’t.


Technically you can: Scan all of the documents. A "relational query language" would have to do the same thing.


That wouldn’t be querying though, right?

Grabbing all the docs in the db into my controller and filtering down that array is what Firebase makes me do instead of writing queries.


> That wouldn’t be querying though, right?

Why not? You're asking a question, of sorts, and getting an answer from the result. That's the literal definition of querying.

> Grabbing all the docs in the db into my controller and filtering down that array is what Firebase makes me do instead of writing queries.

A query language is just an abstraction. One you can have in your code. At some point you still need to "grab all the docs into a controller and filter them down", though. You could push that step into the Firebase service, but it would still have to do the same thing you're doing. There is no magic.

Better would be to provide something indexable so that you don't have to go through all the docs, but you can't index that which does not exist.
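
One common workaround is to make the absence explicit: always write the field (e.g. `shipped: null` until shipment), so there is something to index. A sketch with the Firestore modular JS SDK, assuming an `items` collection:

```
import { getFirestore, collection, query, where, getDocs } from "firebase/firestore";

// Because every doc writes `shipped` (null until shipped), the field exists
// and the query can be served from an index instead of scanning everything.
const db = getFirestore();
const unshipped = query(collection(db, "items"), where("shipped", "==", null));
const snapshot = await getDocs(unshipped);
snapshot.forEach((doc) => console.log(doc.id, doc.data()));
```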


Thanks for creating Firebase!

It's really the definition of a managed database/datastore.

Do you see InstantDB as a drop-in replacement?

To be honest I don't want to have to worry about my backend. I want a place to effectively drop JSON docs and retract them later.

This is more than enough for a hobbyist project, though I imagine things might not work as well at scale.


For what it's worth, we designed Instant with this in mind. Schema is optional, and you can save JSON data into a column if you like.

If you wanted to store documents, you could write:

```

useQuery({docs: {}}) // get documents

transact(tx.docs[docId].update({someKey: someValue})); // update keys in a doc

transact(tx.docs[docId].delete()) // delete the doc

```


Thanks for the response.

2 questions.

How hard is it to swap out firebase for instant? I've had an amazing time with firebase, but I sorta want to switch to using a completely local solution.

I have a small lyric video generator, and while I don't care about my own songs potentially leaking, I would never want to take responsibility for someone else's data. I basically use firebase to store the lyrics after I transcribe them.

Second, do you offer your own auth, or do you just integrate with other solutions?


Glad I could be helpful.

> How hard is it to swap out firebase for instant? I've had an amazing time with firebase, but I sorta want to switch to using a completely local solution.

It should be relatively straightforward to switch. If you have any questions, you can always reach out to us on Discord [1]

One caveat though: Instant is like Firebase; it is not a completely local solution. If you are worried about exposing some data over the internet, I would only store the kind of data you'd have been comfortable storing in Firebase.

> Second, do you offer your own auth, or do you just integrate with other solutions?

We offer our own auth. You get magic code emails and Google Sign-In out of the box. We also expose auth functions in the admin SDK, in case you want to create a custom solution. [2]

[1] https://discord.com/invite/VU53p7uQcE [2] https://www.instantdb.com/docs/auth
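
The client-side magic-code flow looks roughly like this (a sketch, assuming `db` from `init` as in the example above; see the auth docs [2] for the exact API):

```
// 1. Send a one-time code to the user's email
await db.auth.sendMagicCode({ email: "user@example.com" });

// 2. Verify the code the user typed in to complete sign-in
await db.auth.signInWithMagicCode({ email: "user@example.com", code: "123456" });
```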


I'm actually imagining telling end users to host the server locally via Docker. I have some other functionality, like the actual lyric transcription, I need docker for.

Thank you for your help. I'll definitely look into this for my next project


> Instant is like Firebase; it is not a completely local solution. If you are worried about exposing some data over the internet, I would only store the kind of data you'd have been comfortable storing in Firebase.

What does this mean exactly? If you host your own it is still not local?


If you only need simple dropping and collecting back, maybe you should consider AWS S3 or Supabase Storage.


Ohh, I still need a database, I just need the JSON doc format.


Or a key-value store, if the size is limited and speed is essential.


This is an aside but “trifecta but with four” actually has an awesome name: “Superfecta”!


Tetrafecta would be cooler


Cursory googling says tetra is Greek and perfect is Latin, so it's a bastard word like erogenous or television.


Side point: television is a bastard word, but erogenous is not Greek eros + Latin genus; it's all Greek (ἐρωτογενής — it would be erotogenous in English since the root of the word eros is erot-, but the extra syllable was dropped; another example of differentiated transliteration is φωτογενής, which became photogenic instead of photogenous).


I have just written an essay about the word water in the sibling post comments.

One thing I discovered in the process is that the word water comes to English all the way from Proto-Indo-European. The word hydro, however, comes from Ancient Greek, which in turn comes from the same PIE word for water.


> bastard word like ... television.

From where I am, it is simply telly, because why not bastardise it some more while we are at it.


I would probably avoid naming Firebase alternatives with a prefix like “Super” at this time.


I am dumb. Why? Was there some failed/controversial thing?


Probably because of Supabase, I think.


https://en.wikipedia.org/wiki/Superfecta

Sounds like someone made up the name to just sound better than trifecta. It's marketing speak.

Also, as the link says, it has been used to mean more than four. And other languages use their own equivalent of "quadfecta" instead.

Plus, I knew exactly what "quadfecta" meant, but would have no idea about "superfecta".


"Supafecta"


You probably heard this a million times but I still remember trying that simple firebase demo of draw in one box; see the results in another and being amazed. That was one of my pushes out of boring enterprise software death by configuration and into software creation based on modern OSS products.


Was pretty neat to see your investment/involvement!

Made me feel quite old that Firebase is no longer "modern" though...


Awesome to see this launch and to see James Tamplin backing this project.


Thank you James!


If only we had doSQL() for everything.


One bit of feedback: It's always appreciated when code examples on websites are complete. Your example isn't complete -- where's the `transact` import coming from, or `useQuery`? Little details like that go a long way as your product scales out to a wider user base.


Thank you for the feedback, this makes sense!

I updated the example to include the imports:

```

import { init, tx, id } from "@instantdb/react";

const db = init({ appId: process.env.NEXT_PUBLIC_APP_ID, });

function Chat() {

  // 1. Read
  const { isLoading, error, data } = db.useQuery({
    messages: {},
  });

  // 2. Write
  const addMessage = (message) => {
    db.transact(tx.messages[id()].update(message));
  };

  // 3. Render!
  return <UI data={data} onAdd={addMessage} />;
}

```

What do you think?


Much better!


Yes. This gives users the vibe of “this is obvious; if you don’t know it, you are dumb.”


Or that the writers were oblivious, and the documentation shouldn’t be relied upon.


I usually read: you have so few users / care so little that I shouldn’t trust you because you’d have heard this complaint and fixed it. But in this case it was fixed up quickly. Which is great


lol i was wondering the same thing


this is my question too, lol


For those looking for alternatives to the offline first model, I settled on PowerSync. Runner up was WatermelonDB (don't let the name fool you.) ElectricSQL is still too immature, they announced a rewrite this month. CouchDB / PocketDB aren't really up to date anymore.

Unfortunately this area is still immature, and there aren't really great options but PowerSync was the least bad. I'll probably pair it with Supabase for the backend.


Co-founder of PowerSync here. Would love to hear what you would like to see improved in PowerSync :) Thanks!


The docs for React Native. I had to piece together how to do stuff from the code examples and a YouTube video tutorial because they're pretty sparse on information and missing a cohesive tutorial that could get me set up with CRUD locally. Plus the initial setup process on the npm page was notable for how much it required.

I haven't attempted to setup the backend yet, so that's my feedback so far.


Thanks, appreciate the feedback.


ElectricSQL before their announced rewrite worked fully offline and could sync when the clients became online again. Now, that functionality with their rewrite is somewhat removed, as they expect you to handle clientside writes by yourself, which is what I believe PowerSync does as well, am I correct in that understanding? If I wanted a fully offline clientside database that could then sync to all the other clients when online, what would I do? I am looking for this in the context of a Flutter app, for reference.


> Now, that functionality with their rewrite is somewhat removed, as they expect you to handle clientside writes by yourself, which is what I believe PowerSync does as well, am I correct in that understanding?

Yes, that is correct.

> If I wanted a fully offline clientside database that could then sync to all the other clients when online, what would I do? I am looking for this in the context of a Flutter app, for reference.

This is what PowerSync provides by default. If you haven't done so yet, I would suggest starting with our Flutter client SDK docs and example apps — and feel free to ask on our Discord if you have any questions or run into any issues :)


I use TinyBase for the client side store, it can sync with pretty much all the technologies people are talking about here

https://tinybase.org/


Seems like that is only Javascript based. I like ElectricSQL and PowerSync because they're on the database layer and are client agnostic.


Yea you can use it client side and sync to PowerSync or electricsql if you aren't using js on the backend

https://tinybase.org/guides/persistence/database-persistence...


> CouchDB / PocketDB aren’t really up to date anymore.

Source? I’ve been using CouchDB as my game world DB for years, works fine for me?


The biggest "source" of vibes that CouchDB/PouchDB is "dead/maintenance mode" is the corporate ecosystem/contributors around it:

- Couchbase has been increasingly moving away from CouchDB compatibility

- Cloudant was one of the more active contributors until it got eaten by IBM and put into a maintenance spiral (what mother can love what IBM "Blue Mix" has done to Cloudant?)

- In general the still growing number of document DBs that are Mongo-compatible but not CouchDB-compatible (AWS and Azure document DB offerings, for instance)

In Open Source the winds of commercial favor aren't always reflective of Open Source contributor passion, but there too the pace of PouchDB seemed to greatly slow down a few years ago, and lost the interest of some major contributors. CouchDB itself seems to have gotten hugely stuck in a bunch of Apache committees over the design of the next semver major version, with a ton of huge breaking changes that don't really seem to be for solving problems but do some architecture battle under the hood, some political war between Erlang and other programming languages for superiority, and some political war between Apache trying to consolidate core functionality with some of the other database-like engines in their ~~graveyard~~ custodianship.


I'm wary of stuff like this; it's probably really useful for rapid iteration, but what a maintenance nightmare after 10 years, when your schema has evolved 100 times and you have existing customers in various states of completeness. I avoided firebase when it came out for this reason. I had a few bad experiences maintaining applications built on top of Mongo that made it to production. It was a nightmare.


We hear you on the pain of evolving NoSQL schemas. [1]

For what it's worth, we're built on top of Aurora and support relations so evolution should be much easier!

[1] https://mdp.github.io/2017/10/29/prototyping-in-the-age-of-n...


This is why I’ve always stayed well behind the bleeding edge but still within earshot if anything comes along that sounds like it’s of interest to me. I usually code for work and not for pleasure, although I do a little web programming for friends, but I still use jQuery and TypeScript for that. I think the only “new” thing I use is Tailwind, which is a bit of a game changer for what I like to do. I never liked CSS, but it worked well enough for my needs.


Did they say schemas aren’t supported, or is that implied by the firebase label?


We support schemas! You can build them in the GUI or manage them as code [1]

[1] https://www.instantdb.com/docs/schema


I saw the reference to “apps like Figma” and, as one of the people who worked on Framer’s database (also a canvas-based app, also local + multiplayer), I find it hard to imagine how to effectively synchronize canvas data with a relational database like Postgres. Users will frequently work on thousands of nodes in parallel and perform dragging updates that occur at 60 FPS and should at least be propagated to other clients frequently.

Does Instant have a way to merge many frequent updates into fewer Postgres transactions while maintaining high frequency for multiplayer?

Regardless this is super cool for so many other things where you’re modifying more regular app data. Apps often have bugs when attempting to synchronize data across multiple endpoints and tend to drift over time when data mutation logic is spread across the code base. Just being able to treat the data as one big object usually helps even if it seems to go against some principles (like microservices but don’t get me started on why that fails more often than not due to the discipline it requires).


Good point on the update frequency. I believe it is a must to batch the requests and responses for any lib/service of this type to work in a production environment; a performance report/comparison is still needed for people to get an idea of whether this is good enough to support their business model.

About the synchronized data, though, I think it's not about the database but about the data types designed to sync the data? I worked on multiplayer canvas games and we didn't really care that much about relational db vs document db; they both worked fine. I would love to know what the differences and challenges are.


We do indeed batch frequent updates! Still many opportunities for improvements there, but we have a working demo of a team-oriented tldraw [1]

[1] https://github.com/jsventures/instldraw


We would love to hear more about the architecture you used at Framer. Would you be up for a coffee? My email is stopa@instantdb.com


Would love to hear how you went about doing things at Framer!


Congrats on the launch! :)

Apparently I signed up for Instant previously but completely forgot about it. Only realized I had an account when I went to the dashboard to find myself still logged in. I dug up the sign up email and apparently I signed up back in 2022, so some kind of default invalidation period on your auth tokens would definitely make me a bit more comfortable.

Regardless, I'm still as excited about the idea of a client-side, offline-first, realtime syncing db as ever, especially now that the space has really been picking up steam with new entrants showing up every few weeks.

One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails.

Looking at the docs I'm getting the sense that there might be an assumption of 1 email per user in the user model currently. Is that correct? If so, any plans to evolve the model to become more flexible?


Noted about the refresh tokens, thank you!

> One thing I was curious about is how well the system currently supports users with multiple emails? GitHub popularized this pattern, and these days it's pretty much table stakes in the dev tools space to be able to sign in once and use the same account across personal accounts and orgs associated with different emails

Right now there is an assumption of 1 `user` object per email. You could create an entity like `workspace` inside Instant, and tie multiple users together this way for now.

However, making the `user` support multiple identities, and creating recipes for common data models (like workspaces) is on the near-term roadmap.


Congrats on the launch! I think Firebase was started in 2011, and it's incredible that 13 years later the problem is still unsolved in an open way. We took a shot at this at RethinkDB but fell short. If I were doing this again today, Instant is how I would build it. Rooting for you!


I really appreciate your message Slava. Your essays were really influential for us.


I've been using Instant for about 6 months and have been very happy. Realtime, relational, and offline were the most important things for us, building out a relatively simple schema (users, files, projects, teams) that also is local first. Tried a few others unsuccessfully and after Instant, haven't looked back.

Congrats team!


It's been great iterating with you AJ! Can't wait for what's ahead.


What's the short summary of how the authorization system works for this?

One of the things I find quite nice about firebase is the quite powerful separation between the logic of data retrieval/update and the enforcement of access policy -- if you understand it, you can build the prototype on a happy path with barely any authorization enforcement and then add it later, and have quite complete confidence that you aren't leaking data between users or allowing them to change something they shouldn't be able to. You do need to keep the way this system works in mind as you build, though, and I have found that developers often don't really grasp the shape of these mechanisms at first.

From what I can tell -- the instant system is different in that the permission logic is evaluated on the results of queries -- vs firebase which enforces whether the query is safe to run prior to it even being executed ...


> What's the short summary of how the authorization system works for this?

We built a permission system on top of Google's CEL [1]. Every object returned in a query is filtered by a 'view' rule. Similarly, every modification of an object goes through a 'create/update/delete' rule.

The docs: https://www.instantdb.com/docs/permissions

The experience is similar to Firebase in three ways:

1. Both languages are based on CEL

2. There's a distinct separation between data retrieval and access policy

3. You can start on a happy path when developing, and lock down later.

AFAIK, Firebase Realtime can be more efficient, as it can tell statically whether a permission check has passed. I am not sure if Firestore works this way. We wanted to be more dynamic, to support more nuanced rules down the road (stuff like 'check this HTTP endpoint to see if an object has permissions'). We took inspiration from Facebook's 'EntPrivacy' rules in this respect.
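
To give a feel for the shape (illustrative only; the exact syntax is in the docs linked above), rules are CEL expressions attached per namespace:

```
// Illustrative: a "docs" namespace that only its owner can see or modify.
const rules = {
  docs: {
    allow: {
      view:   "auth.id == data.ownerId",   // `ownerId` is a made-up attribute
      create: "auth.id != null",
      update: "auth.id == data.ownerId",
      delete: "auth.id == data.ownerId",
    },
  },
};
```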


> Every object returned in a query is filtered by a 'view' rule. Similarly, every modification of an object goes through a 'create/update/delete' rule.

Is that efficient for queries that return many rows but each user only has access to a few?

Is there a specific reason to not use something like postgresql RLS that would do the filtering within the database where indexes can help?


Yes, reading the essay, that seems like the only "red flag" to me; the rest sounds like a dream db.

Not being able to leverage permission rules to optimize queries (predicate pushdown) seems like too big a compromise to me. It would be too easy to hit pathological cases, and the workaround would probably be something akin to replicating the permission logic in every query. Are there any plans to improve this?


Yes, in the near future we plan to convert CEL expressions to where clauses, which we attach to queries. This would push permissions to the query level, like postgres RLS.


Great, and congrats on the launch!


> Firebase Realtime can be more efficient, as it can tell if a permission check has passed statically. I am not sure if Firestore works this way.

Firestore's rules are also able to prove before the query runs if the query will only return data that the user has access to according to the rules. That's a pretty important property that "rules aren't filters" because it prevents bad actors from DDOSing your system. My former colleague wrote about this: https://medium.com/firebase-developers/what-does-it-mean-tha...


While it seems inflexible at first this system is surprisingly capable and provides great DX, one of the best things about working with firestore.


I've found triple stores to have pretty poor performance when most of your queries fetch full objects, or many fields of the same object, which in the real world seems to be very common.

Postgres also isn't terrible, but also not brilliant for that use case.

How has your experience been in that regard?


It’s not quite the same thing but nearby:

I built an EAV secondary index system on top of Postgres to accelerate Notion’s user-defined-schema “Databases” feature about a year ago. By secondary index, I mean the EAV table was used for queries that returned IDs, and we hydrated the full objects from another store.

We’d heard that “EAV in Postgres is bad” but wanted to find out for ourselves. Our strategy was to push the whole query down to Postgres and avoid doing query planning in our application code.

When we first turned it on in our dogfood environment, the results looked quite promising; large improvement compared to the baseline system at p75, but above that things looked rough, and at p95 queries would never complete (time out after 60s).

It worked great if you wanted to filter and sort on the same single attribute. The problem queries were the ones where we tried to filter and sort on multiple different attributes. We spent a few weeks fixing the most obviously broken classes of query and learned a lot about common table expressions, all the different join types, and strategies for hinting the Postgres query planner. Performance up to p95 was looking good, but after p95 we still had a lot of timeout queries.

It turns out using an EAV table means Postgres’ statistics system is totally oblivious to the shape of objects, so the query planner will sometimes be very silly when you JOIN. Things like forgetting about the value index and just using a primary key scan for some arms of the join, because the index doesn’t look effective enough.

It was clear we’d need to move a lot of query planning to the application, maintain our own “table” statistics, and do app joins instead of Postgres joins if Postgres was going to mess it up. That last part was the last nail in the coffin - we really couldn’t lean on join in PG at all because we had no way to know when the query planner was going to be silly.

It was worth doing for the learning! I merged a PR deleting the EAV code about a month ago, and we rolled out a totally different design to production last week :)


I really love Postgres, but I'll never not laugh at the fact that duplicating a CTE caused my query to go faster... (60s to 5s)

Postgres really trips up when you start joining tables

Sometimes you can fix it with "(not) materialized" hints, but a lot of the time you just have to create materialized views or de-normalize your data into manual materialized views managed by the application


Does postgres not have the ability to hint or force indexes?

Long long time ago, I found that quite helpful with MySQL.


It does not, and that fact is the #1 downside of Postgres. It is not predictable or controllable at scale, and comes with inherent risk because you cannot “lock into” a good query plan. I have been paged at 3 am a few times because Postgres decided it didn’t like a perfectly reasonable index anymore and wanted to try a full table scan instead :(


Nope, weirdly Postgres still doesn't have that ability even today.


It’s not in core, but there are multiple extensions that provide this functionality


I've also found triple stores to have terrible performance, but it looks like the intended use-case for this (like Firebase) is rapid development, prototyping, and startups. You aren't going to generate enough traffic when you're building an MVP for this to be an issue.

And it's a hosted service, so the performance issues are for the InstantDB team to worry about, and they can fold it into the price they charge. It does mean that your application architecture will get locked into something that costs a fortune in server bills when it gets big, but from InstantDB's POV, that's a feature not a bug. From your POV as a startup it may be a feature as well, since if you get to that point you'll likely have VC money to blow on server bills or use to rewrite your backend.


So far we haven't hit intractable problems with query performance. One approach that we could evolve to down the road is similar to Tao [1]. In Tao, there are two tables: objects and references. This has scaled well for Facebook.

We're also working on an individual Postgres adapter. This would replace the underlying triple store with a fully relational Postgres database.

[1] https://www.usenix.org/system/files/conference/atc13/atc13-b...


> In Tao, there are two tables: objects and references. This has scaled well for Facebook.

That's a rather tremendous oversimplification, unless something major changed in recent years. When I worked on database infra at FB, MySQL-backed TAO objects and associations were mapped to distinct underlying tables for each major type of entity or relationship. In other words, each UDB shard had hundreds of tables. Also each MySQL instance had a bunch (few dozen?) of shards, and each physical host had multiple MySQL instances. So the end result of that is that each individual table was kept to a quite reasonable size.

Nor was it an EAV / KV pattern at all, since each row represented a full object or association, rather than just a single attribute. And the read workload for associations typically consisted of range scans across an index, which isn't really a thing with EAV.


I really want an ActiveRecord-like experience.

In ActiveRecord, I can do this:

```rb

post = Post.find_by(author: "John Smith")

post.author.email = "john@example.com"

post.save

```

In React/Vue/Solid, I want to express things like this:

```jsx

function BlogPostDetailComponent(...) {

  // `subscribe` or `useSnapshot` or whatever would be the hook that gives me a reactive post object

  const post = subscribe(Posts.find(props.id));

  function updateAuthorName(newName) {
    // This should handle the join between posts and authors, optimistically update the UI

    post.author.name = newName;

    // This should attempt to persist any pending changes to browser storage, then
    // sync to remote db, rolling back changes if there's a failure, and
    // giving me an easy way to show an error toast if the update failed. 

    post.save();
  } 

  return (
    <>
      ...
    </>
  )
}

```

I don't want to think about joining up-front, and I want the ORM to give me an object-graph-like API, not a SQL-like API.

In ActiveRecord, I can fall back to SQL or build my ORM query with the join specified to avoid N+1s, but in most cases I can just act as if my whole object graph is in memory, which is the ideal DX.


Absolutely. Instant has similar design goals to Rails and ActiveRecord

Here are some parallels to your example:

A. ActiveRecord:

```

post = Post.find_by(author: "John Smith")
post.author.email = "john@example.com"
post.save

```

B. Instant:

```

db.transact(
  tx.users[lookup('author', 'John Smith')].update({ email: 'john@example.com' })
);

```

> In React/Vue/Solid, I want to express things like this:

Here's what the React/Vue code would look like:

```

function BlogPostDetailComponent(props) {

  // `useQuery` is equivelant to the `subscribe` that you mentioned:

  const { isLoading, data, error } = db.useQuery({ posts: { author: {}, $: { where: { id: props.id } } } })
  
  if (isLoading) return ...
  
  if (error) return .. 
  
  function updateAuthorName(newName) {
  
    // `db.transact` does what you mentioned: 
    // it attempts to persist any pending changes to browser storage, then
    // sync to remote db, rolling back changes if there's a failure, and
    // gives an easy way to show an error toast if the update failed. (it's awaitable)
  
    db.transact(
      tx.authors[author.id].update({name: newName})
    )
  
  }

  return (
    <>
      ...
    </>
  )
}

```


Maybe a dumb question, but why do I have to wrap in `db.transact` and `tx.*`? Why can't I just have a proxy object that handles that stuff under the hood?

Naively, it seems more verbose than necessary.
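
Something like this, roughly (purely hypothetical, assuming the `db`/`tx` from the example above):

```
// Hypothetical: wrap a query result so plain assignments become transactions.
function editable<T extends { id: string }>(entity: T, namespace: string): T {
  return new Proxy(entity, {
    set(target, key, value) {
      (target as any)[key] = value;  // optimistic local update
      db.transact(tx[namespace][target.id].update({ [String(key)]: value }));
      return true;
    },
  });
}

// const post = editable(data.posts[0], "posts");
// post.title = "New title";  // persists automatically
```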

Also, I like that in Rails, there are ways to mutate just in memory, and then ways to push the change to DB. I can just assign, and then changes are only pushed when I call `save()`. Or if I want to do it all-in-one, I can use something like `.update(..)`.

In the browser context, having this separation feels most useful for input elements. For example, I might have a page where the user can update their username. I want to simply pass in a value for the input element (controlled input)

ex.

```jsx

<input value={user.name} ... />

```

But I only want to push the changes to the db (save) when the user clicks the save button at the bottom of the page.

If any changes go straight to the db, then I have two choices:

1. Use an uncontrolled input element. This is inconvenient if I want to use something like Zod for form validation

2. Create a temporary state for the WIP changes, because in this case I don't want partial, unvalidated/unconfirmed changes written to either my local or remote db.
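
Concretely, option 2 looks something like this (names are made up; `db`/`tx` as in the earlier example):

```
import { useState } from "react";

function UsernameForm({ user }: { user: { id: string; name: string } }) {
  // Draft lives in local state; nothing hits the db until Save.
  const [draft, setDraft] = useState(user.name);

  const save = () => {
    // validate `draft` here (e.g. with Zod) before persisting
    db.transact(tx.users[user.id].update({ name: draft }));
  };

  return (
    <>
      <input value={draft} onChange={(e) => setDraft(e.target.value)} />
      <button onClick={save}>Save</button>
    </>
  );
}
```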


This is a great question. We are working on a more concise transaction API, and are still in the design phase.

Writing a `user.save()` could be a good idea, but it opens up a question about how to do transactions. For example, saving _both_ user and post together.

I could see a variant where we return proxied objects from `useQuery`.

What would your ideal API look like?


We have an internal lib for data management that’s philosophically similar to linear too. I opted for having required transactions for developer safety.

Imagine that you support the model discussed above where it’s possible to update the local store optimistically without syncing back to the db. Now you’re one missing .save() away from having everything look like it’s working in the frontend when really nothing is persisting. It’s the sort of foot gun that you might regret supporting.

Our model is slightly different in that we require the .save() on objects to create the mutation for the sync. The primary reason is that we’re syncing back to real tables in Postgres and require referential integrity etc to be maintained.

    tx((db) => {
      const author = new Author(db)
      author.save()
      article.name = "New name"
      article.author = author
      article.save()
    })
Mutating an object outside of a transaction is a hard error. Doing the mutation in a transaction but failing to call save within the same transaction is a hard error too.


You make a great point about missing .save().

Mark (our team member) has advocated for a callback-based API that looks a lot like what you landed on. It has the advantage of removing an import too!

Question: how do you solve the 'draft' state issue that remolacha mentioned?


I haven’t seen a better solution than remolacha’s #2 (create separate temporary state for the form).

Forms just inherently can have partially-finished/invalid states, and it feels wrong to try to corral model objects into carrying intermediary/invalid data for them (and in some cases it won’t work at all, e.g. if a single form field is parsed into structured data in the model).


Exactly that. It’s tempting to try to combine them - we’ve all been there. They’re subtly but inherently different, in my experience.


A thenable API from save() could remove the need for explicit tx management, which is worse DevEx.


I’m not quite seeing what you mean. Would you mind redoing the example above for my benefit?

We have controllers that all the user’s actions are funnelled through. The top-level functions in there are wrapped in transactions, so in practice it’s not something you manually wrangle.


I was thinking about something like:

  author.save()
    .then(() => {
      const article = new Article(db);
      article.name = "New name";
      article.author = author;
      return article.save(); // could be used to chain the next save
    })
    .then(() => {
      console.log("Both author and article saved successfully.");
    })
    .catch((error) => {
      console.error("rollback, no changes");
    });

but I confess I might have said that without having the same understanding of the problem as you, so it might be nonsense. It just so happens that I decided to implement transactions this way in a side project of mine.


Gotcha, thanks for the clarification. I’m not sure what that would buy me here.

I have a rule that I only async if it’s a requirement. In my case I can carry out all the steps in a single (simple) sync action. Our updates are optimistic so we update all the models immediately and mobx reflects that in the react components on the next frame.

The network request for the mutation is the only thing that’s running async. If that fails we crash the app immediately rather than trying to rollback bits in the frontend. I know that approach isn’t for everyone but it works well for us.


@stopachka, sorry for the late reply. I've mostly provided my ideal API in the posts above. I think my answer to transactions and forgetting save is to offer a few options, as in ActiveRecord. From what I recall, Rails gives a few ways to make persistent changes:

1. Assign, then save. AFAIK, this is effectively transactional if you're saving a single object, since it's a single `UPDATE` statement in sql. If you assigned to a related object, you need to save that separately.

2. Use ActiveRecord functions like `post.update({title: "foo", content: "Lorem ipsum"})`. This assigns to the in-memory object and also kicks off a request to the DB. This is basically syntax sugar over assigning and then calling `save()`, but addresses the issue around devs forgetting to call `save()` after assigning. In Rails, this is used in 90% of cases.

3. I can also choose to wrap mutations in a transaction if I'm mutating multiple proxy objects, and I need them to succeed/fail as a group. This is rarely used, but sometimes necessary. For example, in Rails, I can write something along the lines of this:

```rb

ActiveRecord::Base.transaction do

  post.title = "Foo"

  post.author.name = "John Smith"

  post.save()

  post.author.save()
end

# Alternatively, using the `update()` syntax

ActiveRecord::Base.transaction do

  post.update({ title: "Foo" })

  post.author.update( { name: "John Smith" })
end

```

This gives transactional semantics around anything happening inside of the `do` block. I think the syntax would look very similar in javascript, for example:

```js

transaction(() => {

  post.update({ title: "Foo" })

  post.author.update( { name: "John Smith" })
})

```


> What would your ideal API look like?

He gave an example in the first post in this chain xD


From what you say, seems like Meteor + React would deliver almost the exact syntax you want, although it's MongoDB instead of SQL.

Reference: https://react-tutorial.meteor.com/simple-todos/02-collection...


Every day, we get closer to what Ember.js did/does.


Is the datalog engine exposed? Is there any way to cache parsed queries?

Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this? Or is it on the roadmap?

I have fairly large and overlapping rules/queries. Is there any way to store parsed queries and combine them?

Also, why the same name as the (Lutris) Enhydra java database? Your domain is currently listed as a "failed company" from 1997-2000 (actual usage of the Java InstantDB was much longer)

   https://dbdb.io/db/instantdb
Given that it's implemented in Clojure and some other datalog engines are in Clojure, can you say anything about antecedents?

Some other Clojure datalog implementations, most in open source

- Datomic is the long-standing market leader

- XTDB (MPL): https://github.com/xtdb/xtdb

- Datascript (EPL): https://github.com/tonsky/datascript

- Datalevin (forking datascript, EPL): https://github.com/juji-io/datalevin

- datahike (forking datascript, EPL): https://github.com/replikativ/datahike

- Naga (EPL): https://github.com/quoll/naga


> Is the datalog engine exposed? Is there any way to cache parsed queries?

We don't currently expose the datalog engine. You _technically_ could use it, but that part of the query system changes much more quickly.

Query results are also cached by default on the client.

> Other datalog engines support recursive queries, which makes my life so much easier. Can I do that now with this?

There's no shorthand for recursive queries yet, but it's on the roadmap. Today, if you had a data model like 'blocks have child blocks' and you wanted to get 3 levels deep, you could write:

```

useQuery({ blocks: { child: { child: {} } } });

```

> Also, why the same name as the (Lutris) Enhydra java database?

When we first thought of the idea for this project, our 'codename' was Instant. We didn't actually think we could get `instantdb.com` as a real domain name. But, after some sleuthing, we found that the email server for instantdb.com went to a gentleman in New Zealand. Seems like he nabbed it after Lutris shut down. We were able to buy the domain after.

> Given that it's implemented clojure and some other datalog engines are in clojure, can you say anything about antecedents?

Certainly. Datomic has had a huge influence on us. I first used it at a startup in 2014 (wit.ai) and enjoyed it.

Datalog and triples were critical for shipping Instant. The datalog syntax was simple enough that we could write a small query engine for the client. Triples were flexible enough to let us support relations. We wrote a bit about how helpful this was in this essay: https://www.instantdb.com/essays/next_firebase#another-appro...

We studied just about all the codebases you mentioned as we built Instant. Fun fact: datascript actually powers our in-memory cache on the server:

https://github.com/instantdb/instant/blob/main/server/src/in...


Definitely waiting for the datalog queries to be exposed before I’d use this.

If it was I would never use another database again.

I think the number of people coming from datascript/datomic who have to work in JS and would prefer to use datalog instead of learning a new query language is big.


Noting this feedback, thank you.


This is from me. I didn't realize the connection to Lutris + Enhydra. It should be listed as an "Acquired Company" + "Abandoned Project". Wikipedia also says that it lasted until 2001. Usage is different from development/maintenance. I will update the entry for the old InstantDB and add an entry for this new InstantDB.

I think given that the original InstantDB died over two decades ago and is not widely known/remembered, reusing the name is fine.


Andy, both my co-founder and I watched your Database course on Youtube. We learned a lot, and it's awesome to see your name pop up :)


As a potential dev user this looks really intriguing, hitting all of the main points I was looking for. I build apps in this space, and the open source alternatives I've evaluated are lacking specifically in "live queries" or don't use Postgres. The docs look great too.

In the docs[1]:

> Instant uses a declarative syntax for querying. It's like GraphQL without the configuration.

Would you be interested in elaborating more about this decision/design?

[1] https://www.instantdb.com/docs/instaql


> Would you be interested in elaborating more about this decision/design?

Our initial intuition was to expose a language like SQL in the frontend.

We decided against this approach for 3 reasons:

1. Adding SQL would mean we would have to bundle SQLite, which would add a few hundred kilobytes to a bundle

2. SQL itself has a large spec, and would be difficult to make reactive

3. Worst of all: most of the time on the frontend you want to make tree-like queries (users -> posts -> comments). Writing queries like that is relatively difficult in SQL [1]

We wanted a language that felt intuitive on the frontend. We ended up gravitating towards something like GraphQL. But then, why not use GraphQL itself? Mainly because it's a separate syntax from javascript.

We wanted to use data structures instead of strings when writing apps. Data structures let you manipulate and build new queries.

For example, if you are making a table with filters, you could manipulate the query to include the filters. [2]

So we thought: what if you could express GraphQL as javascript objects?

``` { users: { posts: { comments: {} } } } ```

This made frontend queries intuitive, and you can 'generate' these objects programmatically.
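
For example, a filterable table can build its query from plain objects (a sketch; `todos` and the filter fields are made up):

```
// Build the query object from the table's current filter state.
function todosQuery(filters: { done?: boolean; ownerId?: string }) {
  const where: Record<string, unknown> = {};
  if (filters.done !== undefined) where.done = filters.done;
  if (filters.ownerId) where.ownerId = filters.ownerId;
  return { todos: Object.keys(where).length ? { $: { where } } : {} };
}

// const { data } = db.useQuery(todosQuery({ done: false }));
```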

For more info about this, we wrote an essay about the initial design journey here: https://www.instantdb.com/essays/next_firebase

[1] We wrote about the language choice here: https://www.instantdb.com/essays/next_firebase#language

[2] We programmatically generate queries for the Instant Explorer itself: https://github.com/instantdb/instant/blob/main/client/www/li...


The GraphQL schema / string language is not required. For example, Juniper defines the GraphQL schema and queries using Rust structs and impls: https://graphql-rust.github.io/juniper/types/objects/index.h... and the actual on-the-wire encoding and decoding format can be anything.


This is awesome. I know that a lot of people are looking for something like the Linear sync engine.

I appreciate that you're thinking about relational data and about permissions. I've seen a bunch of sync engine projects that don't have a good story for those things.

imo, the more that you can make the ORM feel like ActiveRecord, the better.


Thank you. We admire ActiveRecord's DSL. I especially like their `validation` helpers, simple error reporting, and the `before` / `after` create hooks.


Very nice!

However, for our use case we wanted total control over the server database, and wanted to store it in normalized tables.

The solution we went for is streaming the mutation stream (basically the WAL) from/to client and server, and using table-stream duality to store it in a table.

Permissions are handled on a table level.

When a client writes, it sends a mutation to the server, or queues it locally if offline. Writes never conflict: we employ a CRDT “last write wins” policy.

Queries are represented by objects and need to be implemented both in Postgres as well as SQLite (if you want offline querying; often we don’t). A query we implement for small tables is: “SELECT *”.

Note that the result set being queried is updated in real time for any mutation coming in.

By default it doesn’t enforce relational constraints on the client side, so no rollbacks are needed.

However, you can set a table to different modes:

- online synchronous writes only: allows us to have relational constraints, and to validate the creation against other server-only business rules.

The tech stack is Kotlin on client (KMM) and server, websockets for streaming, Kafka for all mutation messaging, and vanilla Postgres for storage.

The nice thing is that we now have a Kafka topic that contains all mutations, which we can listen to, for example to send emails or handle other use cases.

For every table you:

- create a serializable Kotlin data class

- create a Postgres table on the server

- implement reading and writing that data, and custom queries

Done: the apps have offline support for reading a single entity and upserts. Querying requires being online if not implemented on the client.


(1) This is awesome. Feels like this wraps enough complexity that it won't just be a toy / for prototyping.

(2) When a schema is provided, is it fully enforced? Is there a way to do migrations?

Migrations are the only remaining challenge I can think of that could screw up this tool long-term unless a good approach gets baked in early. (They're critically important + very often done poorly or not supported.) When you're dealing with a lot of data in a production app, you definitely want some means of making schema changes in a safe way. Also important for devex when working on a project with multiple people — you need a way to sync migrations across developers.

Stuff like scalability — not worried about that — this tool seems fundamentally possible to scale and your team is smart :) Migrations though... hope you focus on it early if you haven't yet!


Thank you for the kind words!

> When a schema is provided, is it fully enforced?

Right now the schema understands the difference between attributes and references. If you specify uniqueness constraints, they are also enforced. We don’t support string / number types yet, but are actively working towards it. Once that’s supported, we can unlock sort-by queries as well!

> Migrations though... hope you focus on it early if you haven't yet!

We don’t have first class support for migrations yet, but are definitely thinking about it. Currently folks use the admin SDK to write migration scripts.
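
A migration script with the admin SDK looks roughly like this (a sketch; `docs`, `oldField`, and `newField` are made up):

```
import { init, tx } from "@instantdb/admin";

const db = init({
  appId: process.env.INSTANT_APP_ID!,
  adminToken: process.env.INSTANT_ADMIN_TOKEN!,
});

// Backfill: copy `oldField` into `newField` for every doc.
const { docs } = await db.query({ docs: {} });
await db.transact(
  docs.map((doc) => tx.docs[doc.id].update({ newField: doc.oldField }))
);
```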

Question: do you have any favorite systems for migrations?


Nice!

Re: favorite systems for migrations — not really; I've always just kind of not used one, or rolled my own. Desiderata:

* fully atomic (all goes through or none goes through)

* low-boilerplate

* can include execution of arbitrary application code — data-query-only migrations feel kind of limiting.

* painless to use with multiple developers, several of whom might be writing migrations


That's a great list, thank you! We are thinking along similar lines; looking forward to when we can design this portion. If you have other feedback, please let us know.


This looks fantastic. I want to recommend this to my team. We are a small consulting team building apps for clients. I have a few questions to help me pitch my team and clients better:

1. The usual "vendor lock-in": is there a recommended escape hatch?

2. Any big clients on this yet, or at what scale do you expect people to start rolling their own in-house product?


Thank you for the kind words.

> 1. The usual "vendor lock-in": is there a recommended escape hatch?

Instant is completely open source. We have no private repos, so in the event that you want to run the system yourself, you can fork it.

> 2. Any big clients on this yet, or at what scale do you expect people to start rolling their own in-house product?

We have startups in production using us today. We would love to learn more about your use case. You can reach out to us directly at founders@instantdb.com


It reminds me of the data half of Meteor, but it looks better thought-out and, obv., not based on Mongo. Nice work.


Thank you. Meteor was definitely an inspiration.


I'm missing clarity about how I escape InstantDB when I need to, and how to make it part of a larger system.

Say I have an InstantDB app, can I stream events from the instant backend to somewhere else?


> I'm missing clarity about how I escape InstantDB when I need to, and how to make it part of a larger system.

Instant is completely open source. We have no private repos, so in the event that you want to run the system yourself, you can fork it.

> how to make it part of a larger system.

If you have an existing app, right now I would suggest storing the parts that you want to be reactive on Instant.

We're working on a Postgres adapter. This would let you connect an existing database, and use Instant for the real-time sync. If you'd be interested in using this, reach out to us at founders@instantdb.com!


I've just used this to start a bouldering app; so far it has been extremely simple. Great work.

I'm not sure about how things grow from here in terms of larger aggregates and more complex queries, though, so I'm slightly worried I'm painting myself into a corner. Do you have any guides or pointers here? Or key areas people shouldn't use your db?


Glad to hear the experience so far has been good!

> larger aggregates and more complex queries

Currently Instant supports nested queries, pagination, IN, AND, and OR. We have an internal implementation for COUNT [1], but need to update permissions for aggregates.

We're always hacking away on blocker features. If we can't get to something in time and it blocks your app, you can reach for the admin SDK as an escape hatch [2]

[1] The 'admin-only' count starts here: https://github.com/jsventures/instant/blob/main/server/src/i...

[2] https://www.instantdb.com/docs/backend


That sounds excellent. Count is the one I'm mostly after (x total attempts, y successes, z people have done this, etc). Right now it's at worst just inefficient, and my scale is very low, so that's not a problem yet. Knowing there's both an escape hatch and that these things are on the plan means I don't feel like I'm setting myself up for issues later.

Btw I guess you created a new repo for the public release, so https://github.com/instantdb/instant/blob/main/server/src/in... is the link for others.

Great to see the open source release, congrats! : )


I'm curious what a bouldering app is? As in climbing, like a route checklist type thing?


Yep. Which climbs have you attempted, at what grades.

My gym uses griptonite, but it's so slow I feel like it'd be quicker overall to create my own app for logging things. I decided to use instantdb as a backend database and it's working nicely so far.


Climber Dreamer here: Former RFID Sensor stuff.

I'd love to see a smart hold that integrates with an app, where the gym can associate holds/grips in a DB inventory, then pull them out and assign them to a boulder/wall & route. The smart holds have simple pressure sensors for knowing when they are gripped, and for how long. Advanced ones measure force/weight.

Just walk in, scan the boulder's code to slurp all the holds into the local table, and then you have them all ping back to the app with every touch, mapping your route and the exact telemetry applied along it. So it shows you the wall and the grips you hit as you went up, and how much energy and time you spent on each node.

Report climbing timings and accomplishments on the gym leaderboard... have challenges - subscribe to a wall/route/boulder and set a threshold to show you when anyone beats your time etc etc

Strava for the actual climb. (The gym could then have a db of all the fastener positions in the facility, and then simply assign a hold to a given slot - each hold has a JSON spec sheet for hold type, rating, style/whatever metrics climbing geeks geek out on, I mean get a grip folks)

Anyway - it would allow the local climbing app to maintain a db history of every route, grip, and difficulty you've actually touched. Rate the hold types, routes, walls, boulders - and have that update in real time across all clients subscribed to a wall or gym. A gym can post new routes, and your app subscribers just get the new table for that route and what grips and profiles each one has.

Then (the not-eco-friendly version) is to epoxy a BLE tag to a route's holds in the wild, and you can have the app point at the climb, ping all the BLEs, and learn the route - a ClimbGPS - and it can run locally with a dynamic table of the BLEs it can sense as checkpoints as you pass them (obviously only timing data could be saved unless you do RSSI/something to get vertical and horizontal pathing by triangulating)...

(I built these features in varying capacities for different things previously - but climbing would be fun to do it with as well.)

I am sure there are already smart grips out there... but this rappelled into my thoughts like Tom Cruise as soon as I read your post.


That would be really cool, but also really expensive - gyms have thousands (tens of thousands for some) of holds and they're usually cleaned by power washing and otherwise constantly abused. Maybe a middle ground would be computer vision? Not nearly as accurate or comprehensive of course, but a lot less hardware. (Most gyms in my experience have comprehensive camera coverage already for insurance purposes so I don't think there would be any additional privacy concerns).


Make a Spline Web App with the routes drawn on the spline doo-dad:

https://old.reddit.com/r/Spline3D/

I can see a simple route-design widget in Spline being a nifty UX for a climbing thing that uses the app ID.

Can you clone an Instant AppID?

I know I am just thinking out loud - but it's pretty easy to envision how to use these neato tools and services that have been cropping up recently... It's inspiring.


This would be very cool; my first thought is that this is something that could be added to something like a Kilter Board. There you already have a high-cost setup where holds are normalised across a huge number of installations, integrated apps, etc. With some more premium option for force sensors per hold, I'm sure climbers would go pretty nuts for all the force data, given how much people like the same kind of thing just on fingerboards.


I built a few things in the past that would apply - but now, with sensor availability, it's far easier than when we built the Barimba Smart Rail for bars in ~2007:

https://www.youtube.com/watch?v=4pTy1TB-2z8

And we did spatial RFID for warehouse (cannabis) tracking with Z-Slot level:

https://www.youtube.com/watch?v=FXkNY0vTATA

https://www.youtube.com/watch?v=c7qvfm6vhF0

But the thing I was focusing on WRT Instant would be the ability to "clone an AppID" -- where, when you fork an AppID, it dupes the tables into your personal account...

I haven't fleshed out the thoughts on this too much because I am not sure if you can easily do that with Instant, as it's a hosted thing...

(YC Rejected these) :-)


I am not a climber, but that sounds like a really fun project!


My gym uses TopLogger. It's not super bad, but it could use a nicer design. Are you making an app for indoor gyms or outdoor boulders? Or both? It sounds good to have another alternative in the market.


That one looks nice. We'll see where this goes; it's starting as just scratching an itch, so it's likely to be really pared down compared to other apps. I'm also not a designer - so far the UI is all AI-generated and it's very "bootstrappy".

My gym uses griptonite, which is fine except there's an ad for their premium thing every time I open it, and it's extremely slow. Also, either due to the gym or griptonite, not all the climbs get added, and not right away. It's frustrating hitting a new grade and then not being able to record it, as then I've got a log saying I maxed out somewhere I haven't.

The main realisation I had was that griptonite tags are just IDs: I don't actually need any griptonite data or to get the gym to sign up for anything to start tracking my own climbs. I can just scan the tags as they are. I'm also making it require a photo of the climb, as I'm tired of trying to talk with my wife about which ones we were working on ("the pink, next to the green that's orange? With the weird hold? Oh, the round one?") and would like just a picture to point at.

I'm planning to do a few things like add live customisable leaderboards, which should be easy with instantdb, mostly for fun. Maybe I'll throw on a few affiliate things on top, otherwise I expect this should be lightweight enough it won't really cost me anything to run.

Also my background is AI stuff and I'd like to try things like using SAM to pull out the specific climb from a messy image, classify holds to say whether climbs look "crimpy" or "juggy", find similar climbs by image (recommend next ones to try) etc.


This is really cool. Curious to see more about how the database can be queried. I don't write much SQL these days, and I have no dedication to Postgres, but it does integrate with pretty much everything. Also curious how I'd go about the basics in Instant.

For example, creating a user table and ensuring that emails are unique - I've done it 50 times with Postgres. Is that part built out yet?

Very cool. Appreciate the "Don't Make Me Think" API.

(written with aqua)


> creating a user table and ensuring that emails are unique - I've done it 50 times with Postgres. Is that part built out yet?

Yes. Auth comes out of the box with Instant. You get support for magic-code-based auth and Google OAuth, and you have the option to extend it using your own backend if you prefer.
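
For a flavor of the magic-code flow, roughly (a sketch -- method names may differ slightly from the current SDK):

    // Sketch: assumes db.auth.sendMagicCode / signInWithMagicCode
    // from the Instant client SDK.
    await db.auth.sendMagicCode({ email: 'climber@example.com' });
    // ...the user receives a code by email and types it in...
    const code = '123456'; // whatever the user typed
    await db.auth.signInWithMagicCode({ email: 'climber@example.com', code });
    // In React components, db.useAuth() then exposes { isLoading, user, error }.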

P.S. Aqua seems very cool; I broke my thumb 6 months ago and would definitely have appreciated a tool like that.


Is it correct to assume that if your existing application has lots of data stored in standard PostgreSQL tables, you can't have InstantDB sync with it?

In other words it primarily targets brand new projects or projects that can completely migrate away from their current database?


We're currently working on a Postgres adapter. If you have an existing app and would be interested in using Instant with it, please send us a note: founders@instantdb.com

The way the Postgres adapter would work: You give us your database url [1], and Instant handles the real-time sync. [2]

For roll-out, we'll test it ourselves first and then take on beta users.

[1] Encrypted at rest: https://github.com/instantdb/instant/blob/main/server/src/in...

[2] Here's a fun 'pg introspection' function: https://github.com/instantdb/instant/blob/main/server/src/in... . You can take a sneak peek through the codebase by searching for 'byop'.


From skimming through the site, it's not clear to me what the BE looks like. Obviously, the BE part is the hard/interesting part. Is it open source and/or self-deployable? Or is this tied to a backend-as-a-service you guys provide?


Not affiliated with Instant, but I saw that the backend is available on their GitHub: https://github.com/instantdb/instant/tree/main/server


The backend is completely open source. The code is in the `server` directory:

https://github.com/instantdb/instant/tree/main/server


Sounds conceptually similar to Zero: https://zerosync.dev/

I haven't looked in detail yet — what are the main differences relative to Zero?


Zero is by the Replicache team and they are really nice guys. I've interacted with Aaron Boodman quite a bit in their Discord and he was very responsive and helpful, and took a lot of my comments into consideration in improving the product. It's a very hot space which IMO will revolutionize how we build GUIs. Replicache is a bit hard to grasp at first, but the nice thing is you bring your own DB (vs. using a service). I think this project needs more recognition. I would hate to see them fade just because they don't have fancy backers like YC and the usual crowd. If they keep at it they might prevail, since VC-backed startups like InstantDB tend to sell out and flame out.


I think it would be difficult to compare at this point, as there aren't many details about Zero outside of the blog on their landing page. I believe it's very much in early (but active) development.

From that blog: > We are working toward a source release in summer 2024 and an open beta EOY 2024.


Makes sense. FWIW, there are some more details here: https://replicache.notion.site/Introducing-Zero-8ce1b1f184aa...


Hadn't seen this, thanks!


This is awesome. I built a real time whiteboarding app for teachers over 10 years ago on the backbone of the original Firebase service.

It was so fast I was able to build basic collision physics of letter tiles and have their positions sync to multiple clients at once. What a shame to be killed by Google.

I haven't had a need for real time databasing since, but this is inspiring me to build another collaborative app.


Did Google kill it?


I read the whole thing, but I fail to understand how this helps with or fits into the picture of a CRUD app. Most apps I interact with and work on for a living are essentially CRUD: SQL Server and a DAO layer in Spring.

How do I need to start thinking conceptually for this InstantDB or Firebase concept to click?

Say, for a collaborative text editor, I'd use an off-the-shelf CRDT JavaScript implementation.


Imagine you want to create a 'todo' list.

If you used classic Rails, you'd be very productive. You could write most of your code on the server, and sprinkle some erb templates.

However, if you want to improve the UX, you generally end up writing more Javascript. Once you do that, things get hairier:

You create REST endpoints, funnel data into stores, normalize them, and denormalize them. Then you write optimistic updates. If you want offline mode, you worry about IndexedDB, and if you want it to be multiplayer, you end up with stateful servers.

If you had a database on the client, you wouldn't need to think about stores, selectors, endpoints, or local caches: just write queries. If these queries were multiplayer by default, you wouldn't have to worry about stateful servers. And if your database supported rollback, you'd get optimistic updates for free.

This is the inspiration for Instant: it gives you a 'database' you can use in the browser.

If you're curious, I wrote a more detailed essay about this here:

https://www.instantdb.com/essays/db_browser#client


> If you want offline mode, you worry about IndexedDB

I don't understand the offline mode. If I was to make a single player offline game that runs on the browser, sure, offline mode makes sense and I want to store on client machine.

But in the space of web apps, all the data needs to be synced with the server db.

Why would I want to store half of my to-do list on the client and the other half on the server? The end goal is that the customer's data is stored in the cloud...


So the user can keep working on a plane, when the cable or electricity is out, etc., or wants to experiment without inflicting changes on others, like a DVCS. The user may not have “their data on your cloud” as an end goal.

There’s a paper by Kleppmann et al. about local-first apps that is worth a read.


I'm always skeptical about this offline use case. Is it really as important as the CRDT community claims?

If you are on a plane, you could also just read a book.


It’s about being in control of your data. git folks thought so too, and why not? The paper mentioned is approachable.


I'm a bit naive here, so asking a stupid question. I have an `offline only` app where I read things from the file system (markdown) and currently store them in a redux state for accessing filepaths and ids.

I've been planning to move to IndexedDB using Dexie for kinda the same use case: easier transactions, not maintaining a huge redux state (16k lines or so), and improved performance.

Now if my app is not supposed to be backed by a online database (single player only, complete offline), would instant make sense for this?

or would indexedDB be the safer choice?


I would not recommend using IndexedDB as your primary storage. This is because browsers can sometimes delete the underlying store (there are various reasons for this, but one is that the user is running out of space on their machine) [1]

If you have access to the file system, I would consider using SQLite to store everything. If you then end up wanting auth / collaboration, you could try Instant.

[1] This is a good comment that goes deeper https://news.ycombinator.com/item?id=28158407


Ooo, nice read! Since I use Tauri, on Mac the browser would be Safari, and since it's a purely offline app, losing that data would be disastrous!

I'll move over to SQLite in that case, since that can at least be persisted! Thanks!


This looks great! Is there a way to sync with an API? For instance, my site currently has a REST-based API with a non-Postgres-backed db, but I’d like to add offline, sync, and real-time capabilities. Is there an option to sync the updates outside of the Postgres store?


We don't have support for syncing with an external store. Two questions out of curiosity: what db do you use, and what would your ideal API look like?


I saw the mention of Google's CEL for authorisation and permissions; however, I would like to know a little about security. Apart from the appId, can I restrict calls to the db by domain, etc.? Firebase has protection for such things. Somebody should not be able to just take the appId and start calling the db.


We don't currently expose the `domain` a request comes from in permissions, but we'd be happy to add that in. I've opened up a ticket here [1].

[1] https://github.com/instantdb/instant/issues/18


Having abused a number of Firebase databases, I can say that the domain restrictions Firebase has don't do anything at all.


What isn’t modern about Firebase and what makes this modern in comparison?


When Firebase was first built, using a document store was a great choice for building a local abstraction that enabled optimistic updates and offline mode. But the lack of relations makes it a real schlep to change your data model when you start adding new features to your app. You end up hand-rolling joins or duplicating your data to avoid complete re-writes. [1]

With Instant, you get a relational Firebase.

[1] https://www.instantdb.com/essays/next_firebase#firebase


One of the first things I've found myself doing with the few Firebase Realtime Database projects I've made is building some kind of abstraction to emulate joins.

I realise that this is probably a symptom of "holding it wrong" and not embracing denormalization, but it was always present and also horrifically inefficient w.r.t. Firebase's pricing due to the amplifying effect it had on reads - none of those projects would've been practical to take to market without reworking that significantly.

Really what I wanted was a relational database that also did the fun/flashy real time updates without the heavy lifting. At face value it sounds like you're offering exactly what I wanted so I look forward to giving it a try next time!


This seems like a game changer for real-time applications. We have been using Firebase mostly for its RTDB and websocket-style implementation, without actually maintaining websockets on the backend. This takes things a step further.


The datalog syntax has me curious. It looks like a JavaScript "port" of Datomic's Datalog syntax. Have you considered using other forms of Datalog that are seemingly more compatible with JavaScript? See https://en.wikipedia.org/wiki/Datalog?useskin=vector#Syntax

I wouldn't mind using the Datalog syntax as-is since I have some experience using Clojure with Datomic, but it did surprise me that someone would decide to use this syntax over a syntax used in other Datalog engines (and predating Datomic itself).


> but it did surprise me that someone would decide to use this syntax over a syntax used in other Datalog engines (and predating Datomic itself)

We are Clojure programmers, so our introduction to Datalog was actually through Datomic in 2014. We are fans of other query syntaxes (SPARQL looks cool too), but we find Datomic's flavor the most ergonomic for us, and it's an added win that we can express queries as plain data structures.


Thanks for the answer. I learned that the backend is implemented in Clojure, so presumably de-serializing JSON means client-side queries trivially become Clojure data usable for databases like Datomic, Datascript, etc.

I noticed Postgres is used as the storage layer for Instant's triple store. I'm still curious why you built a custom triple store instead of using e.g. XTDB v1. Like Instant's data store, XTDB is schemaless, and it features a Datalog engine capable of basic query planning. For instance, the order of where clauses doesn't negatively impact performance, unlike Datomic. What were the showstoppers in adopting a solution like XTDB?


> What were the show stoppers in adopting solution like XTDB?

#1 would be the lack of a production ready DBaaS (like Aurora offers for Postgres), if I had to guess.


This is super exciting! I was literally just wondering if something like this existed a few days ago (seriously)!

Some super minor feedback appended. All the best with InstantDB!

There is a missing word in the message that appears after clicking on the "Create an app" button:

> With that one-click you’ve claimed an id that you can use for storing your data. Now we'll show you [how] to wire up your db to an app and start adding data. Check out the walkthrough below on your left with the full code and preview on the right.

Also on smaller screen sizes there is no left and right. :)


Thank you! I just updated the tutorial:

1) Added in the 'how'

2) Changed the sentence on mobile to say: 'Check out the walkthrough below, and the full code example right after.'


I have been eyeing Electric SQL lately for developing a local-first app. How does InstantDB compare with Electric SQL?

Side note: Electric SQL is currently going through a rewrite, so things are a bit up in the air.


I have done a little more reading and it appears that one fundamental difference between InstantDB and ElectricSQL is that InstantDB does not sync with standard PostgreSQL tables. ElectricSQL does.

If that understanding is correct, then InstantDB may be a good fit if you are starting your database from scratch or can completely get rid of your existing one.

I have more than 170 GB of data in a SQL database. I can sync parts of it to ElectricSQL using shaped queries.


> standard PostgreSQL tables

We're working on a feature where you bring an existing Postgres database, and use Instant as the sync layer [1]. If you have an app like this and would be interested in beta testing, let us know: founders@instantdb.com

[1] For a peek, you can search for 'byop' in the codebase. Here's a fun introspection query, which definitely awed me with Postgres' powers: https://github.com/instantdb/instant/blob/main/server/src/in...


> ... we would need a schema. But it turns out triple stores don’t need one.

Data always needs a schema (unless it's random bits aka max entropy). The question is just where it's managed and enforced.


How do I handle server-side logic? Let's say I want moderation or rate limiting in the chat app example.


Yes that is a deal breaker if not possible


Exactly. I would really like an answer to this. I don't understand how you can even build something without support for this. They have an "Instant on the server" section in the docs, but that's just about querying and writing to the database from an external server. Nothing about middleware or whatever the solution would be.


Our permission system [1] can work like middleware.

> Let's say I want moderation

I am not 100% sure what you mean by moderation. There could be two ideas:

Moderation 1: 'some people can see all posts, or delete other people's posts'

You can write a rule that allows 'moderators' full access to chat, while 'users' can only see the channels they belong to, and CRUD their own messages.

Moderation 2: 'I want to validate that some field passes a test -- like 'messages must be under 140 chars''.

You could write a permission rule like `size(data.message) < 140`, which ensures this. We're inspired by ActiveRecord's validations [2], and aim to make validations a lot more ergonomic in the future.
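
To make that concrete, a rules file might look roughly like this (a sketch -- `creatorId` is a made-up field, and the exact shape is in the permissions docs [1]):

    // Sketch of CEL-based rules; field names follow the permissions docs
    // loosely and may differ from the current schema.
    export default {
      messages: {
        allow: {
          view: "auth.id != null",              // only signed-in users can read
          create: "size(data.message) < 140",   // the validation example above
          delete: "auth.id == data.creatorId",  // users can only delete their own
        },
      },
    };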

> rate limiting

We don't have built-in support for rate limiting, but CEL could handle it. If you have a specific need, let us know and we'd be happy to prioritize it: founders@instantdb.com

[1] https://www.instantdb.com/docs/permissions

[2] https://guides.rubyonrails.org/active_record_validations.htm...


Thank you for your response. While permissions sound like they can cover a lot of cases, let's say I want to moderate with a list of banned words that are defined in a JSON file or fetched from a service.

Ideally I would like exactly what you provide but be able to insert arbitrary server-side code that is run before the change is committed to the database.

I remember having the same "problem" with Firebase back in the day and ended up having to use a Cloud Function that listened on changes and updated records, but it was not a nice experience.


That's a great point, Kiro. Down the road, we could look into introducing a more expressive, Turing-complete language, which you could use before a transaction runs. We decided to go with CEL for the sandboxing benefits.


This looks awesome and like something I could leverage on my team. I’m trying to modernize the back end of our chat service (at a large company you’ve heard about), to support real-time instead of polling and modern affordances like typing indicators, read receipts, and reactions.

I’ve built a prototype of a full stack using Flask + SocketIO + SQLite as well as an iOS client to prove the concept to the VPs.

How well do you think this can scale? Any plans to make native SDKs for iOS and Android?


Thank you for the kind words.

> How well do you think this can scale.

We have startups in production using Instant. It could hiccup if you get a big spike of traffic, but all of us will be on deck to fix it.

> Any plans to make native SDKs for iOS and Android?

It's definitely on the roadmap, but we want to get the Javascript / React Native experience really right first.


Interesting that this is Clojure :-)

Clojure + TS seems to be a good way to go, without being hung up on CLJS.


> Clojure + TS seems to be a good way to go

TS as in TypeScript? They're on very opposite ends of a spectrum; what would you use it for if you're already using Clojure? And what's the "hang up" with CLJS?


I'm dreaming about a Next-like framework that will do React SSR in GraalVM JS engine but will do data fetching, routing and other stuff in Clojure.


I recently started a project with C#/.Net8 with Sveltekit using adapter-static, and it’s been pretty great so far.

Different tech, obvs, but similar spirit. I like the idea of starting with my own monolith with a clear path to breaking out the frontend in the future if we need to scale.


Why Clojure (and by proxy Java)? I don't have a problem with either, but it puzzles me quite a bit.

Why not standard Node.js with a shared module? Assuming performance is not the primary goal.

Why not Rust structures generated from a model file, and a Rust server? Assuming performance is the primary goal.

Why not a JVM with a lightweight runtime? (Assuming instancing is used for scale here, that's a lot of wasted RAM.)


Clojure was made with databases and concurrency in mind. We've used it at previous startups and projects, and we find it a productive language.

Also, the Clojure community is amazing. The Clojurians Slack is one of the most helpful communities for solving hard problems. Stopa will be giving a talk at the Conj later this year [1]

[1] https://2024.clojure-conj.org/#/speakers


How do you prevent the user from uploading a 5gb string to one of the fields?


If the field is indexed, we limit the size to 1 kilobyte. If it isn't indexed, the maximum size is 250 MB.


A really exciting product from many years ago was Meteor, which included a realtime database layer on top of Mongo that facilitated many very novel realtime apps.

However, it didn't scale well in terms of performance to large numbers of users.

Would anyone have thoughts on comparisons to Meteor?


Firebase, Parse, and Meteor were definitely inspirations for us.

Instant's main advantage is that we support relations. This means you can create data models like 'users -> comments -> apps'. For a bit more about why this matters, this is a good post:

https://mdp.github.io/2017/10/29/prototyping-in-the-age-of-n...


Is there a plan to make it self-hostable?


You can already self-host! We have instructions on standing up the server on our github [1]

[1] https://github.com/instantdb/instant/tree/main/server


Congrats! BTW, we found a few typos on the site that you might want to fix: https://triplechecker.com/s/yyNfc1/instantdb.com?v=wh8Jr


Thank you, I went ahead and fixed them [1]

[1] https://github.com/instantdb/instant/pull/29


If it's offline where the data is stored? IndexDB?


I saw in the docs that it uses IndexedDB. Didn’t read carefully enough to understand how full the replica is, if it’s possible to make a 100% offline app, and what would be the limits of storage in the browser.

Also it would be nice to clarify memory consumption.

In general I’m glad such a thing now exists!


> if it’s possible to make 100% offline app

You could theoretically make a 'fetch all' query, and replicate a completely offline experience. However, Instant is designed for hybrid use cases.

> what would be the limits of storage in the browser.

The limits are set by IndexedDB, and they're a bit esoteric (they depend on how much space the user has available on their hard drive). Storage could reach into the GBs, but the browser can sometimes choose to delete it. This comment goes more into it: https://news.ycombinator.com/item?id=28158407


Yes, queries are cached locally on IndexedDB


This looks great. We use our own version of something much more naive which allows for the various benefits you have (but yours does more). Ours is also based on Linear but we go all in on mobx like they do too. It’s a great model where we have optimistic updates and a natural object graph to work with in typescript. I’ll have a play with this to see if it could eventually be used as a replacement.

Noticed in your docs you say that Hasura uses RLS for permissions, but that’s not true. They have their own language for effectively specifying the filters to apply on a query. It’s a design decision that allows them to execute the same query for all connected clients at the same time, using different parameters for each one.


I didn’t clock the use of “triples” as the store on first read. That’s a non-starter for existing dbs and pretty much a dead end for anyone who eventually wants a db structure they can use outside of this model.


We are working on a Postgres adapter. If you have an existing app and would be interested in using Instant with it, please send us a note: founders@instantdb.com

The way the Postgres adapter would work: You give us your database url [1], and Instant handles the real-time sync. [2]

For roll-out, we'll test it ourselves first and then take on beta users.

[1] Encrypted at rest: https://github.com/instantdb/instant/blob/main/server/src/in...

[2] Here's a fun 'pg introspection' function: https://github.com/instantdb/instant/blob/main/server/src/in... . You can take a sneak peek through the codebase by searching for 'byop'.



Is this similar to CouchDB/PouchDB?

Can the backend be replaced?


Ah, sorry.

I thought this was a firebase alternative in the sense that it's an open source library that works with Postgres, but it's a cloud storage service with a nifty frontend.


I initially thought that as well but see their comment at https://news.ycombinator.com/item?id=41325384

The github repo includes the server


what would you say are pros/cons vs. supabase?


We provide support for optimistic updates and offline mode out of the box. Our idea is to give you the best of both worlds between Firebase and Supabase. [1]

[1] https://www.instantdb.com/essays/next_firebase#the-missing-c...


How about (inspired by another HN post) - a rebuild of a TUI for email, given how it's built:

https://blog.sergeantbiggs.net/posts/aerc-a-well-crafted-tui...

https://aerc-mail.org/

It seems that building a version of this aerc email TUI with Instant is completely doable?

Might be an interesting tutorial to build out an Instant FroBenDB (Instant is an instant Front-BackendDB :-) --- but the textual nature of aerc and its configs seems ripe for just bolting it onto Instant.


That strongly reminds me of Meteor. It's crazy to think how many modern problems were solved by design there years ago.

I wish you the best of luck, having a real-time database on the client does make things much easier.


I mean, Meteor did a lot of these things, and they worked fine for toy examples. But if you started to do something of any scale, Meteor choked.

To be fair, maybe it was unrealistic to expect Meteor to do real-time communication for my game.


Please make a flutter SDK


Thanks for the request! Made an issue to track this here! [1]

[1] https://github.com/instantdb/instant/issues/19


Is it fully self-hostable? A lot of the time you have to hunt down small clauses in tools like these that state that some essential part of it can't be self-hosted.


We open sourced all parts of Instant with this repo. There are no other private repos. The backend is written as a multi-tenant system, but you could run it with a single 'app'. If you try to set it up and have any issues, please let us know.


Is Firebase not modern anymore?

I think this looks like a backend-in-a-box type of product? So that you mostly just have to focus on the front end?

Could be cool for early stage projects.


Congrats to the Instant team. It’s a fantastic project. The DX is great and the engineering behind the datalog engine is really impressive.


Thank you! Awesome work with pglite as well, very exciting tech!


Is it different than CRDTs?


> we tail postgres’ WAL to detect novelty and use last-write-win semantics to handle conflicts

can you elaborate more on how you achieve this


We use the concept of 'topics'. A topic encodes: 'The part of the index a query cares about'.

For example, given a query like "fetch user where id = 1", there could be a topic like "users:id:1" [1]

When a WAL record comes in, we find queries by their topics, and invalidate them. This triggers a refresh.

This is inspired by Figma's LiveGraph [2], which in turn is inspired by Asana's Luna [3]. The essays cover the idea in more detail, but you can also start diving into the code, at invalidator.clj [4]
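
A toy sketch of the idea in JavaScript (this is not the actual implementation -- that's the Clojure in invalidator.clj [4] -- just the shape of it):

    // Toy sketch: map topics -> subscribed queries, invalidate on WAL records.
    const subscriptions = new Map(); // topic -> Set of { query, refresh }

    function subscribe(query, topics, refresh) {
      for (const topic of topics) {
        if (!subscriptions.has(topic)) subscriptions.set(topic, new Set());
        subscriptions.get(topic).add({ query, refresh });
      }
    }

    // Placeholder: derive topics like "users:id:1" from a decoded WAL record.
    function topicsForRecord(record) {
      return [`${record.table}:id:${record.id}`];
    }

    function onWalRecord(record) {
      for (const topic of topicsForRecord(record)) {
        for (const sub of subscriptions.get(topic) ?? []) {
          sub.refresh(); // re-run the query and push fresh results to the client
        }
      }
    }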

[1] This is an example. Inside Instant, we encode them as a 'datalog' pattern, something like `[:ea "1" :users/id]`

[2] https://blog.asana.com/2020/09/worldstore-distributed-cachin...

[3] https://www.figma.com/blog/livegraph-real-time-data-fetching...

[4] https://github.com/instantdb/instant/blob/main/server/src/in...


I worked on LiveGraph for a long time at Figma. We went through our own evolution:

1. first we would use the WAL records to invalidate queries that could be affected (with optimizations for fast matching) and requery the data

2. then we used the info from the WAL record to update the query in-memory without asking the DB for the new result; it worked for the majority of the queries that can be reliably modeled outside of the DB

3. I believe after I left, the team reverted to the re-query approach, as managing a system that replicates the DB behavior was not something they were excited to maintain, and as the DB layer got scaled out, extra DB queries were less of a downside


LiveGraph was an inspiration for us, Slava; it made me smile to see your comment. 2. is _really_ interesting.

I'll reach out to you on twitter; would love to learn more about your experience


Seems comparable to https://pocketbase.io/


I thought the same when reading the headline, but the approaches are quite different.

Pocketbase (at a very high level) takes SQLite and layers on a Go ORM, a relatively loose (on the scale from typical REST/HTTP on one end to GraphQL on the other) autogenerated REST API, a grammar for expressing authz constraints, websocket updates, and an embedded JS engine for registering handlers for the various CRUD model and webserver events.

It's _very_ well designed, but I think the approach taken here with datalog and triples on top of postgres is worth a close look as the trade-offs will be very different, though it does seem like a lot of layers


Even though the two may not be fully comparable, I would like to give another vote to pocketbase, simply because I've found the experience of working with it to be absolutely stellar.


Very cool! How does this compare to Supabase?


We provide support for optimistic updates and offline mode out of the box. Without these it's a real schlep to build Linear-level applications. [1]

[1] https://www.instantdb.com/essays/next_firebase#supabase-hasu...


This has been tried with Supabase using electricsql https://supabase.com/partners/integrations/electricsql

Interestingly the team at electricsql are now rewriting their solution because it didn’t scale and was too complex https://next.electric-sql.com/about


Electric SQL had a different design. Instant is more inspired by systems like LiveGraph, which power apps as big as Figma. LiveGraph was itself inspired by Luna, which powers Asana.


Why would I use this over Yjs or Automerge?


> Why would I use this over Yjs or Automerge?

Yjs is great if you're sharing a single data structure, like a document. It doesn't work as well if you are sharing relational data, like 'documents for a workspace'.

We are thinking about supporting Yjs for document editing inside Instant


I'm using Hasura connected to a postgres DB at the moment. What you have built sounds great.

Hasura offers a self-hosted solution, so I know that if they decide to close up shop for whatever reason, I'm not left in the lurch having to re-engineer my entire solution. Do you offer this now, or are you planning to?


Thank you!

And yes you can self host today. We have instructions on standing up a server on our github [1]

[1] https://github.com/instantdb/instant/tree/main/server


This could be very useful for real-time, collaborative apps. Looking forward to trying it out on my next project.


Looks great! How is the migration story when schema changes are needed? How do you deal with old clients?


> Looks great! How is the migration story when schema changes are needed?

We have an admin API you can use to write migration scripts. The process is a bit manual though, and a more integrated solution is on the roadmap
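
For reference, a migration script with the admin SDK looks roughly like this (a sketch -- the 'climbs'/'grade' names are made up, and exact package/option names may differ):

    // Sketch: assumes the '@instantdb/admin' package with init/tx.
    import { init, tx } from '@instantdb/admin';

    const db = init({
      appId: process.env.INSTANT_APP_ID,
      adminToken: process.env.INSTANT_ADMIN_TOKEN,
    });

    // Example migration: backfill a new 'grade' field on existing climbs.
    const { climbs } = await db.query({ climbs: {} });
    await db.transact(
      climbs
        .filter((c) => c.grade == null)
        .map((c) => tx.climbs[c.id].update({ grade: 'ungraded' }))
    );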

> How do you deal with old clients?

Instant treats the backend as the source of truth. If there's an inconsistent cache, we drop the cache and fill it from scratch. We tend to write code that's backwards compatible, and suggest the GraphQL ethos: when changing your schema, make sure active clients won't break.


Asking the real questions.

Also, how does backward compatibility work? An offline-first app might sometimes be running a stale version of the code, right?


Cute commit message...


These ideas are cool but I wonder how security works. Do you do like rate limiting and stuff like that?


We built a permission system on top of Google's CEL [1]. Every object returned in a query is filtered by a 'view' rule. Similarly, every modification of an object goes through a 'create/update/delete' rule.

You can learn more about the rules language in the permission docs: https://www.instantdb.com/docs/permissions

[1] https://github.com/google/cel-java


Just replaced Firebase with this in my personal notes app. Also removed redux along the way :) This is great!!


Hey, this is great. But this should be a pay-per-use pricing model instead of $30/month upfront. I don't think with this pricing model it can compete with Cloudflare's suite of products like Durable Objects, KV, etc.


We offer a generous free tier which doesn't limit your number of projects, never pauses, and is available for commercial use. The Pro plan is $30/mo, and then you pay for usage.


In case it's helpful for anyone, I did a little write-up of my experience using Instant a couple months ago to hack together a simple weekend project: https://www.alexreichert.com/blog/ceramics-with-instantdb

tl;dr -- I'm a big fan :)


Tailing the WAL is an interesting approach. How do you handle the potential increased load on the database from constant WAL reads?


We use PG replication slots and listen to updates. This doesn't add much load to the database -- it's similar to having a read replica. Adding more servers would mean more replication slots, which could slow down PG. When this happens, we'll likely replicate PG's WAL onto something like Kafka. This is what LiveGraph does [1]

[1] https://www.figma.com/blog/livegraph-real-time-data-fetching...


Congrats on the launch! It looks like all your examples are React (or Vanilla JS with a minimal implementation of reactivity).

Would you be able to add examples for Vue JS?


I saw the `db.useQuery` function, quite good for people who are familiar with react-query, but is there a `useMutation` equivalent? It seems that `db.transact` does not return anything stateful.


`db.transact` returns a promise. It resolves when the transaction is guaranteed by the server, or, if you are offline, once it's been enqueued. I'll make a note to include some information about this in the docs, thank you.
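
So a `useMutation`-style wrapper is mostly a matter of tracking that promise yourself, e.g. (a sketch, assuming `db`, `tx`, and `id` from the SDK and React's `useState`):

    // Sketch: track the state of db.transact yourself, react-query style.
    const [status, setStatus] = useState('idle'); // 'idle' | 'pending' | 'error'

    const addTodo = async (text) => {
      setStatus('pending');
      try {
        // The UI already updated optimistically; the promise resolves once
        // the server confirms (or, when offline, once the write is enqueued).
        await db.transact(tx.todos[id()].update({ text, done: false }));
        setStatus('idle');
      } catch (err) {
        setStatus('error'); // the optimistic update is rolled back
      }
    };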


This looks awesome. Maybe a stupid question: the appID is public, so does that mean anyone can query someone else's database if they know the appID?


Is this a drop-in, same-client-sdk alternative to firebase?

It seems like that’s what would do best in the marketplace… people seem to be fine with the API of firebase and just want it to be cheaper


We love the small API and fast getting-started experience of Firebase. We take a lot of inspiration from them for our write API. Hand-rolling joins was often a pain point, though, and we thought a GraphQL-like interface was a better experience.


This looks very cool!

I'm slightly worried about permissions evaluating to "true" if they're not specified. I think this will lead to a lot of actions being accidentally allowed.


Thank you!

We wanted to lean towards a faster getting-started experience. We are thinking about options to change the default or introduce a kind of 'wildcard' in permissions.


I‘m wondering how this compares to convex (https://www.convex.dev/)


Both Convex and Instant let you build apps quickly without worrying about the backend. Where Instant differs is that queries and transactions can run on the client: you get optimistic updates and offline mode by default. Convex, on the other hand, lets you define queries as functions that can run on the edge. I haven't used Convex deeply, but this is my understanding.


Thank you so much! I’m a first-time builder of a bigger CRUD app. While I’m happy to build it with traditional methods the first time (REST API, SSE, auth, etc.), I would love to use offerings like Instant or Convex in my next projects.

Both look really promising in my opinion.

(Edited for typos)


Any plans to support other backends besides javascript?


We have an unofficial HTTP API; we'll add it to the docs, but we've already had folks use it for non-JS backends! [1]

[1] https://paper.dropbox.com/doc/Unofficial-Admin-HTTP-API--CVa...


Do you have clients for Android, iOS, Mac apps, Flutter, Rust? If not, how hard do you think it is to implement a client for an additional language?


The core client SDK is pretty small -- it's about 10K LOC. We want to get the core abstractions right in JS-based environments, then expand. We really like Mitchell Hashimoto's post about how he built Ghostty on different platforms [1]

[1] https://mitchellh.com/writing/zig-and-swiftui


So if I understand it right, one option you're seriously considering is... after the JS client feels right, you write a "core" in C/Rust/Zig, then "wrap" it with Swift, Kotlin, and Dart?


Yes. for more context:

Right now because our client SDKs are in JS, we can share much of the logic between React Native, React, and vanilla JS.

If we were to add native client SDKs, we'd have to duplicate the logic across different languages. One alternative path could be to write some shared code in a core language.

This idea is not set in stone at all though, I just enjoyed Mitchell's experience report.


This is awesome, and in a way the reverse of HTMX :)


Yes, our bet with Instant is that browsers have become so powerful that we can run many computations locally and don't need to wait for the server all the time.


Maybe they're aiming for different use cases, but for modern web app development I still prefer EdgeDB.


Their icon looks like fig.io but inverted


This gave me a good chuckle :)


I'm curious if something like this would be good for multiplayer games?


We have a basic example demonstrating how you can use Instant to make games [1] and a live iOS app built with Instant and React Native [2]

[1] https://www.instantdb.com/examples?#8-merge-tile-game

[2] https://github.com/jsventures/stroopwafel


Kind of like Horizon + RethinkDB? which sadly disappeared out of nowhere


RethinkDB is certainly an inspiration for us! We thought their design was very elegant.


This looks awesome! Do you have any numbers around end-to-end latency?


A "modern" Firebase? Firebase is only ~10 years old...


How is it better than Firebase or Supabase?


We wanted to bring the best of both worlds by packaging optimistic updates and offline mode with the power of relations [1]

[1] https://www.instantdb.com/essays/next_firebase#the-missing-c...


The third paragraph in the link says the following

> Currently we have SDKs for Javascript, React, and React Native.


How is the AsyncStorage limit handled?


We designed Instant so that the client store works as a cache: it only stores a partial amount of data -- just the queries you need to load the page. The server is responsible for storing all the data.


How does it compare with Liveblocks?


Both Instant and Liveblocks support ephemeral data like cursors, activity indicators, and presence.

The difference comes down to storing data: Liveblocks persists data per 'room'. Instant supports full relations, so you can create a query like "listen for items in these 3 rooms".


Databases accessed directly from JavaScript in the browser may change yet again how multi-user web apps are written. Looking forward to playing with InstantDB.


you could call it freebase, because this is very stimulating


best thing I ever did was move away from firebase


Would like to try this out if it had a Flutter/Dart SDK


This is on our radar; we made an issue for tracking and visibility [1]

[1] https://github.com/instantdb/instant/issues/19


is the server side open source too?


Yes, the server is open sourced here: https://github.com/instantdb/instant/tree/main/server


> “Facebook and Airbnb“

I know it's "just a paycheck," but I despise both of these companies, so I'll probably pass on this.

The usage of "schlep" so freely is also a bit unsettling.

Good luck!


This looks promising! Does any LLM understand Instantdb yet?


Thanks but no thanks. Firebase is already modern.



