Cool! Y’all, this is obviously an RPC interface, but the point is that `import { onNewTodo } from './CreateTodo.telefunc.js'` is not, as written, an RPC. That transform, with seamless type support et cetera, is what makes this interesting. (If you think it’s interesting.)
I think it’s interesting; I experimented with a higher-magic version of this a couple of years ago. (It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions, which was really fun when it was working.
My experience was that for the simplest use cases, the ergonomics were unbeatable. Just drop a `@remote` tag into your TodoMVC frontend, and there’s your TodoMVC backend. But as soon as I had to look under the hood (for something as simple as caching behavior), the “just a function” interface was a distraction and I found myself wishing for a more conventional RPC interface.
(I think that might be why you tend to see this sort of magic more in the context of a full framework (such as Meteor), where all the typical use cases can be answered robustly. If you’re building your thing out of modules, cleverer abstractions usually mean more work overall to integrate everything and boring is better.)
But it has a section titled "Network Transparency" that talks about when transparency fails. And he calls out three things that make network calls fundamentally different:
1. Availability. It's possible that your network "function" will be impossible to call for some amount of time.
2. Latency. The round-trip latency of RPC calls is extremely high, which affects API design in major ways. Typically, you wind up wanting to perform multiple queries or multiple operations per round-trip.
3. Reliability. Network calls may fail in many more ways than local calls.
And these are all issues that have defeated many "transparent" RPC systems in the past. Once your application spreads across multiple machines that interact, then you need to accept you're building a distributed system. And "REST versus RPC" is relatively unimportant compared to the other issues you'll face.
To JS developers' credit, writing modern JS in many ways already is like writing a distributed system.
Even if you stay fully client-side, you have bits of code that are running in different parts of the browser and you have to marshal data back and forth between them. Many web API functions are really RPC calls into a different browser process.
JS has primitives like channels, promises, and async/await to deal with those.
At least there might be ways to use those as building blocks for an RPC system that lets you handle the three problems better.
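For example, a rough sketch using nothing but promises, where `callRemote` stands in for whatever actually does the network I/O (the helper and its defaults are made up):

async function rpcWithRetry(callRemote, { retries = 3, timeoutMs = 5000 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      // Latency: bound how long a single attempt may take.
      return await Promise.race([
        callRemote(),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error('RPC timed out')), timeoutMs)
        ),
      ])
    } catch (err) {
      // Availability/reliability: retry transient failures a bounded number of
      // times (only safe if the call is idempotent), then surface the error.
      if (attempt >= retries) throw err
    }
  }
}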
The client-server relationship is distributed but the intra-browser stuff is just async IPC. Writing robust clients against remote APIs requires considerations around retries, failover, idempotency and read/write consistency. It seems to me that distributed patterns in pure client code are thin on the ground and limited to things like CQRS and Sagas.
Web RPC protocols are not trying to replace local function calls though. They're trying to replace remote API calls (REST, GraphQL etc.). And all remote protocols have to solve problems like availability, latency, and reliability. Those problems remain. This is trying to solve a different problem with different trade-offs.
The problem it solves is unnecessary protocol noise and the trade-off is tight coupling between frontend and backend code. I think it's obvious that the trade-off makes sense for small projects and MVPs: most changes require touching both ends and typically the team isn't differentiated between backend and frontend anyway.
But even with large projects that have to cater to many clients, a tightly-coupled RPC system can solve problems like overfetching and underfetching as a straightforward method for implementing the BFF pattern[1].
Telefunc is about replacing generic endpoints (RESTful/GraphQL APIs) with tailored endpoints (RPC).
That's why the trade-off is simpler architecture (generic endpoints are often an unnecessary abstraction) VS decoupling (tailored endpoints require the frontend and backend to be deployed in-sync).
In the long term I foresee RPC also being used by very large teams. I have ideas around this, stay tuned. I can even see companies like Netflix ditching GraphQL for RPC, although this won't happen any time soon.
In the meantime, RPC is a natural fit for small/medium sized teams that want to ship today instead of spending weeks setting up GraphQL.
Every generation keeps trying RPC and learns its lesson… eventually.
On Windows it was DCOM, then COM+, then Remoting, then WCF, then who knows, I lost track.
Rest APIs are simple and easily debuggable, magic remote api layers are not.
That's why REST is still prevalent even if WebSockets had better performance parameters (and in my testing it did have performance advantages); yet 7 years after my testing, how many sites are running WebSockets?
For lots of APIs this is somewhat true. However, I recently took a deep dive into "REST" and realized that for many APIs, you really have to contort how you think in order to make it fit into that model.
It generally works for retrieving data (with caveats...), but I found that when modifying data on the server, with restricted transformations that don't necessarily map 1-1 with the data on the server, it feels like forcing a square peg into a round hole. I tend to think of verbs in that case, which starts to look like RPC.
("REST" is in quotes because the REST model proposed by Fielding (ie, with HATEOAS) looks almost nothing like "REST" in practice).
> If Telefunc is bug-free (which I intend to be the case), then you won't need to debug Telefunc itself.
And if no one crashes, you don't need airbags.
If I'm being blunt, reality doesn't give a shit what you think. It's better to design with the assumption that there are bugs, so that _WHEN_ it happens, the users aren't up a creek without a paddle.
These sorts of implicit assumptions are how painful software is made.
No, but somebody created type checking, linting and testing.
Not sure what's up with the CS people and their halting problem. In the industry we solved (as in developed ways to deal with) the problem of verification decades ago.
Also, debuggers. Nobody said the verification can't be done by a human.
verifying software is correct implies solving the halting problem.
What you mean is "no known bugs", so maybe use those words instead. "Verification of correctness" has a specific meaning in our industry.
yeah yeah, I get it, those stupid CS people and their P=NP talk. Don't they know you can obviously verify correctness without verifying it for all possible inputs? What next, you can't prove a negative such as absence of bugs?!?
> verifying software is correct implies solving the halting problem.
No, producing a program that can verify that all correct programs are correct implies solving the halting problem.
Verifying a particular piece of software is correct just implies you've proved that one piece correct. (And probably wasted your time dicking around with it only to find that the actual issue was in software you treated as 'outside' of the software you were verifying...)
What you're describing is the programming version of approximation. It's understood that it has a margin of error due to the approximation.
What you're claiming here is that if you check enough inputs you've proven it correct, and what I'm telling you is that's not the case.
The fact is, nothing has been proven, the wording on the webpage itself is more honest (no known bugs and a suite of automated tests).
To verify a program is correct is a much stronger claim than what is even on the website. And that requires restricting the input or solving the halting problem.
Generic endpoints are a design smell. Junior devs making junior dev problems because someone who's been coding for 10 minutes wrote a series on API design on Medium.
Interestingly there was some research around the latency issue, trying to automatically rewrite the client code so that remote calls are batched together. (This was done in Java where rewriting the .class file was tractable): https://www.doc.ic.ac.uk/~phjk/Publications/CommunicationRes...
Availability and reliability can be handled similarly by thinking about classes of failure modes. Most RPC frameworks have error codes that are too fine-grained (see HTTP and its cousin REST).
Latency is actually mostly solvable with something like cap'n'proto, which pipelines the promises so that you can have normal-looking composable functions without the latency cost.
Joel Spolsky's advice is usually fine, but it's important to understand that his commentary tends to be about what ideas are in vogue / dominant at the time of the post, and tends to have a more limited perspective (vs listening to researchers in the field actively investigating the problems and investing in ways to find solutions).
For example, we have an explicit class that manages RPC calls so that at the call site it looks like a regular function. But since it's I/O it returns a promise, which makes it clear that it's I/O (something you have to do anyway in non-Node.js JavaScript). Additionally, it returns a promise of a Result object type, just like any other failable call, which you have to handle properly. Works pretty well (although in retrospect I'm now wishing we had used structured clone instead of JSON to send messages around).
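Roughly this shape (a simplified sketch, not our actual code; the names are made up):

type Result<T, E> = { ok: true; value: T } | { ok: false; error: E }

class UserRpc {
  constructor(private baseUrl: string) {}

  // The call site reads like a function call, but the return type makes both
  // the I/O (Promise) and the failure modes (Result) explicit.
  async getUser(id: number): Promise<Result<{ id: number; name: string }, string>> {
    try {
      const res = await fetch(`${this.baseUrl}/getUser?id=${id}`)
      if (!res.ok) return { ok: false, error: `HTTP ${res.status}` }
      return { ok: true, value: await res.json() }
    } catch (err) {
      return { ok: false, error: String(err) } // network failure, timeout, DNS, ...
    }
  }
}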
Sorry, but this post just shows a lack of experience in dealing with distributed systems. Joel was bang on the money about the problems of disguising network interactions in applications.
Async vs Sync is not enough. Network partitions have multiple failure modes depending on a range of factors, but one of the most pernicious is the “undetectable cloud break”, an invisible break between two routers. Having a well defined “api” that is the clear “we are communicating over the network here” makes it much easier to reason about and handle these issues.
There is an enormous difference between “this I/O call failed” and “this I/O call took 10 minutes to fail”.
vlovich123 built Cloudflare's R2, I think he has some experience with distributed systems.
The RPC debate is confused because the two sides of the debate have different ideas of what RPC is. The anti-RPC crowd is arguing about RPC systems from the 80's, which tried to make RPC look exactly like a local function call, including blocking until the call was complete. The pro-RPC crowd is mostly speaking from experience using modern RPC systems which do not do this; they use asynchronous callback APIs, have some mechanism for reporting network errors, etc. Modern RPC systems are called RPC because they encourage you to organize your network API conceptually into something like function calls, but they certainly don't try to hide the fact that there is a network here that can fail in all the ways networks fail.
I don't see why automatically converting functions to RPC APIs is incompatible with a clear "we are communicating over the network". Just put all the RPC functions in their own `network` namespace or whatever.
One counterpoint to that is that this works very well in Erlang (and Elixir) and has done so for 30+ years. Well, I might be cheating a little bit with my definition of "so" :)
I'll agree with you that it's not a trivial change, far from it. You need to switch your mindset from local to remote, from procedures to agent messages, from exceptions to non-local error handling, but it remains quite simple and readable.
I feel that Telefunc is bringing developers a good chunk of the way towards that goal. And, who knows, maybe there is another simple model that is not exactly agents, and that would also work nicely.
> Most RPC frameworks have error codes that are too fine grained (see HTTP and it’s cousin REST).
Are HTTP and its cousin REST examples of RPC frameworks? I don’t think so, and I may have misunderstood your point. But if you do consider them to be RPC, can you explain your reasoning?
Despite drawbacks here and there, I do have appreciation for RPC, e.g. JSON-RPC is widely used in the distributed/decentralized tech space I work in. It’s fairly well-specified and well-suited for a variety of use cases:
The distinction that it has to look like a normal function call is unhelpful. To me RPC and IPC are synonymous (which I get isn’t the Wikipedia definition). I lump them all together as “distributed communication”. The actual problems I think about are:
* semantics of what the request / response looks like and how failures communicate
* how are the APIs composed? Not the local function call, but the functionality calling the API implies (see cap’n’proto’s page on composable remote APIs - it’s really brilliant).
* can you communicate across programming languages?
* how is ownership of remote resources managed?
* how is routing / dispatch of API calls managed?
For example, "fetch" in JS land looks a hell of a lot like a normal function call to me. And I certainly then wrapped it with easier-to-read APIs that looked more normal and handled routing and error handling for me. The main difference with HTTP is that RPC systems traditionally autogenerate a bunch of code for you. But in HTTP land you now have efforts like OpenAPI to do similar things. So HTTP isn't RPC, but the autogenerated code from a description of the API makes it RPC? That gets you into the ship of Theseus paradox, which isn't helpful. At what amount of wrapping distributed API calls in local function calls do they transform into RPC?
To me it turns out that async / sync is a key distinction and the original sin of RPC in the 80s. It’s also why I view the coloring problem as a good thing (at least given where the tech stack is today). Making functions that may be doing I/O stand out and needing thought is a good thing. The problem with many RPC/IPC mechanisms from the 80s is they tried to make the code seem synchronous not that it looks like a function call. But I haven’t seen any RPC systems where async vs sync is the distinguishing characteristic (eg cap’n’proto and fetch are both async).
To me, HTTP, REST, SOAP, COM, gRPC, cap’n’proto, JSON-RPC, XPC, Mach etc all try to solve the same problem of distributed communication and processing, just by making different trade offs.
HTTP tried to standardize RPC semantics, but you can clearly see it's optimized particularly around static document retrieval and typical browser tasks. I'll buy that static document retrieval need not be classified as a remote API, but there's a lot of non-static document retrieval layered on the same semantics that amounts to proper API calls; notably, REST tries to get everything to conform to that so that middleware has some convention and you get browser interop. It works surprisingly well because a good majority of apps are CRUD and don't need anything more.
JSON-RPC tries to solve routing/dispatch to be more automated if I recall correctly because REST is such a cluster (look at all the routing libraries which to me are an anti pattern). + JSON-RPC is particularly easy in JS which is usually the language on one side of the transaction.
COM solves all of the above and things like distributed ownership of resources, except it historically didn’t have a good security model (I don’t know it well enough to know what’s improved there but I don’t assume it’s stayed static since the 90s even if it hasn’t found traction outside Microsoft).
They all try to solve different parts of the problem or have different sets of tradeoffs, but to me it’s fundamentally the same problem. It’s the same reason I don’t view distributed systems as something that requires different machines, CPUs, processes or anything about geography. You can have distributed systems within one process, you can have it within processes on the same CPU, you can have it between CPUs on the same device, etc. Heck, to me even the kernel and user space form a distributed system with syscalls as a primitive RPC mechanism. JNI is a form of RPC too although you’ve seen traditional systems like gRPC supplant it even within process because it moves complexity out of the JNI layer into a simpler system with easier to follow rules.
Now you can sometimes make simplifying assumptions to reduce complexity, reduce power usage, etc., and not all distributed systems necessarily have the same sets of problems. But fundamentally, to me it's all the same problem space, and that's why I don't distinguish the terminology so much.
Yes, and that's why in Telefunc's world remote functions are called "telefunctions" to differentiate them from normal functions.
Telefunc doesn't try to completely abstract away the network: as you say, it's not possible to abstract away network latency and network failure. Telefunc is about creating network endpoints as easily as creating functions.
> My experience was that for the simplest use cases, the ergonomics were unbeatable.
Exactly.
> the “just a function” interface was a distraction and I found myself wishing for a more conventional RPC interface.
For real-time use cases, I agree. I believe functions are the wrong abstraction here for real-time. I do plan to support real-time but using a "it's just variables" abstraction instead, see https://github.com/brillout/telefunc/issues/36.
> cleverer abstractions usually mean more work overall to integrate everything and boring is better.
Yes, Telefunc's downside is that it requires a transformer. The Vite and Webpack ones are reliable and used in production (using `es-module-lexer` under the hood). The hope here is that, as Telefunc's popularity grows, more and more stacks will be reliably supported. Also, there is a prototype using Telefunc without a transformer at all, which works but needs polishing.
> I think it’s interesting; I experimented with a higher-magic version of this a couple of years ago. (It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions, which was really fun when it was working.
That sounds familiar! We were doing all of that for Opalang around 2009-2010, if I recall my calendar correctly. Were you there?
> (It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions
Are you me? I made this as a fun project a while ago, and left it there gathering dust, but earlier this year it found new life in an AI application and it's delivering promising results.
There's not much new under the sun. Virtual machines were created in the 1960s, for example.
The wisdom I was given by an old hand is that the computer industry goes through a cycle of focus: disk, network, CPU - at any given time the currently 'hot' technology is targeting one of these hot spots.
I am not sure I have lived long enough to see this play out for myself but it seemed that there was some truth to it when I first heard it.
You're absolutely correct. Had this moment w/ my dad, when I started playing around with early VMware and excitedly told him about it and he was like "Well, yeah we use that all the time at work. It's called VM/370" (current: https://en.wikipedia.org/wiki/Z/VM but see list of historical versions)
What I just really dislike is all the fanfare and marketing blah. It seems that we can't have a proper conversation about such things. Performance and bottlenecks are always the same. Something is your current bottleneck. You focus on getting rid of that bottleneck and you discover the next bottleneck. There is always one. People will always take advantage of the bottleneck removed and hit the next one. Can't we just openly talk about the fact that we've removed one and the new bottleneck is of a different 'type' (like you say, disk, network, CPU). Do we always have to pretend like something truly new has been found? Can't we acknowledge what we're really doing there?
Different thing, same vein: The recent posts on here about finite automata (aka state machines). These are the bread and butter of computer science. You learn about those in every computer science 101. And in 101.2 you learn about pushdown automata because finite automata can't do everything you might need (i.e. they can't count). Yet the discussion here and the linked articles and frameworks linked to sound like they're completely novel.
It might not be the same. You see, context and surrounding technology change rapidly, so ideas that were not feasible 50 years ago (or had some other type of problem) might be feasible today.
History can't repeat, because the underlying technology always moves forward, and that change is so important that it invalidates classifying almost anything as a total repeat.
I don't claim that it's a total repeat. I say I don't like the fact that the opposite is often claimed, i.e. that something is "totally new". Emphasis on "totally". Can you give specifics?
You are right in that technology changes and different things that were not possible become possible or become mainstream accessible. What I don't like is the complete ignorance of what came before because of marketing. Don't get me wrong, I get it in that sense. It only sells well when you make everyone believe it's the best thing since sliced bread and that you just invented it out of thin air. I don't have to like that though.
We start out with sequential reads being faster, for example. Think tape drives, and how file formats and search algorithms got optimized for sequential reads being faster. Then hard drives come along where you can suddenly do much faster 'random reads', simply because seeking is so much faster in comparison, and people take advantage of it. But then you build enough functionality on top of that sudden ability to do random reads relatively fast that it does become the bottleneck, and you instead organize things on your HDD such that they result in sequential reads instead. Along come SSDs, which are so much faster overall that you don't have to any longer. Until it repeats and your workloads bottleneck on even an SSD's abilities. But the idea of organizing your files in a certain way or taking advantage of the read characteristics of your storage isn't completely novel at all.
Or take databases. We start out with accessing data by key only, because it's faster. You need to know your access pattern beforehand if you want things to be fast. Someone comes along and invents relational databases and it becomes possible to actually do that in a 'fast enough' way as well because of hardware advances. Suddenly someone else comes along way later and re-invents accessing data by key only if you need 'real speed' (aka NoSQL databases).
Or like in another thread where someone claimed 'the cloud' invented node-independent re-attachable networked storage. No, they didn't. I'm not sure if it's the earliest, but the earliest I know of is EMC^2 Symmetrix, and that was way pre-cloud, i.e. in the 1990s. SANs are/were a thing for a long time. Mainstream access to such technologies by simply renting an EC2 instance w/ EBS volumes is something the cloud did give us, though.
You mean FSMs? I don't really have any links about those, no. I learned about them in university. That's been a while and all of that info was in paper form. But I'm sure there are great online resources nowadays.
Just in general, I do see this in my history, which came through HN a while ago. I never read any of it, but from a cursory glance it seems a lot like what we did in computer science at uni: http://infolab.stanford.edu/~ullman/focs.html
> the computer industry goes through a cycle of focus: disk, network, CPU - at any given time the currently 'hot' technology is targeting one of these hot spots
SSR makes me think we’re going back to doing everything on the server, after quite a few years focusing on the client. I think this might be the third go-around in my career.
Network, and to a lesser extent CPU (or more generally, compute) in large-scale deployments, as a result of possibilities enabled by networking. We're calling this the cloud, but the first iteration of "the cloud" started with storage: efficient, reliable access to data, decoupled from individual nodes. Now we take that for granted, so maybe the current era is "all of the above, but storage started to feel like a solved problem".
> We're calling this the cloud, but the first iteration of "the cloud" started with storage: efficient, reliable access to data, decoupled from individual nodes
This is exactly what I dislike about all this "look, shiny, new" kind of thing. No, networked storage that is decoupled from compute nodes is not a cloud thing. Sure the cloud does that too. But data centers all over the globe used EMC^2 Symmetrix for a long time: https://en.wikipedia.org/wiki/EMC_Symmetrix
I wish every CS curriculum had a mandatory course on the history of CS.
Covering how many times the same ideas have been "new" and then discarded as not useful, only to be rediscovered. It could help reduce the amount of churn in the industry, so we can move forward instead of forgetting and relearning the same things over and over.
It's after the year 2000 - definitely a joke on Bret's part. He is very big on keeping computer history alive and contextualizing things.
You're meant to pretend Bret is giving the presentation in 1973; e.g. his comment about "but there's no way Intel will corner the market and stifle innovation, right?"
The issue with RPC was that network calls were masked to look like ordinary calls. But they're not ordinary calls: they might take a long time to respond, they might fail because of network issues, you could send a request and then fail to receive the response. Those kinds of situations are not common with ordinary calls, so it creates some kind of impedance mismatch.
For example, it's not possible to call a JS function, have it do its work, and then have it fail to return a result for some reason. The worst thing that could happen is a stack overflow, but then it won't even let you call the function.
I think that RPC is fine as long as it's clearly separated, and those who read and write the code don't mask it behind ordinary, innocent-looking calls. It's OK to call personApi.getFullName(person.firstName, person.lastName). It's not OK to call person.getFullName().
Async functions, now available in many languages, solve many of the issues RPC had in its childhood. With the problem that you get "colored" functions. Thinking about it, that's probably a good thing, because with RPC you had colored functions, except you didn't know about it; some innocent functions would just randomly take forever to return.
> Async functions, now available in many languages, solve many of the issues RPC had in its childhood.
Like which problems?
Transparency? Nope; the caller still needs to set up where the remote function resides, which of the multiple remote functions with that name must be executed, etc.
Latency? It actually makes it harder - you now need separate plumbing so that calls which take more than $X minutes must be considered a failure.
Reliability? Not this one either - the error returned by a function will still have to be handled separately from an error returned by the network.
Well, I think a fair presumption is that the option being compared against is still distributed. Network latency and disruption are always going to be an inherent part of that. The comparison was more against RPC without async.
Primarily, async exposes that the call you are dealing with goes over the network. You can no longer accidentally put a blocking rpc call in the UI thread.
You can easily send multiple concurrent requests without fuss.
It has built in primitives to add timeouts on any call, even chained ones, without having to rely on complicated thread-watchdogs or checkpointing.
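In plain terms, something like this (the endpoints are made up; `AbortSignal.timeout` needs a reasonably recent runtime):

// Two concurrent requests, each with its own timeout; no thread-watchdogs needed.
const [user, todos] = await Promise.all([
  fetch('/rpc/getUser', { signal: AbortSignal.timeout(5000) }).then(r => r.json()),
  fetch('/rpc/getTodos', { signal: AbortSignal.timeout(5000) }).then(r => r.json()),
])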
1. It's an abuse of what async/await is. async/await is about concurrency and says NOTHING about the cause of that concurrency.
2. It's an excuse to rationalize what is clearly a bad idea.
RPC should obviously be RPC. What seems obvious to you in that function call isn't going to be obvious in 5-10 years once your sensibilities are gone and it's someone else dealing with the crap you wrote.
> We fought so hard to get rid of crazy RPC protocols (COM+, CORBA) and abominations like SOAP and people keep trying to bring them back.
For those that dare re-kindle the old ways of RPC, only despair, anguish and insanity await. The mindless boilerplate, the infinite depths of parser recursion, the envelopes within envelopes, the madness inducing mysteries of undeclared encoding, the deadly omission of byte order markers, the incomprehensibly complex, yet vague error messages... and the network. The network is waiting to devour your function calls, returning only the broken skeleton and ruptured entrails of what were once perfectly beautiful, good, honest and peace loving return values. The old ways must be locked away, and cast into the deepest fissure in the deepest depths of the ocean, where there is no chance they be rediscovered and unwittingly released on humanity.
Or, how I feel when a vendor sends documentation for their SOAP web service.
Why did DCOM fail? Yes, sure, the protocol documentation is extremely obscure and the tooling kind of sucks, but conceptually? I assume that there’s a reason aside from Microsoft choosing to jump on the web services bandwagon, but I can’t actually find anyone come out and say it. The closest I’ve seen is complaints about scalability, but they don’t agree what’s at fault, either—some point to pinging, some to heavyweight COM+ activation... I don’t get it. Given the relative success of seemingly-isomorphic Java tech, there has to be something.
This would be a good contrast, rather than a vulnerability being hidden for years in rarely-updated software, you get a vulnerability that sneaks in without you knowing you've updated anything.
That’s really well put. When I saw this I just thought, well, that’s probably a lot of complexity to save a few lines of code. And then you’ll just add those lines of code back later when you realise that the client stubs are the easy bit.
Meteor is a very opinionated framework. This project allows one to seamlessly add RPC to an existing frontend+backend codebase. Plus the out of the box TypeScript support for the remote procedures. I haven't seen anything similar before this.
> If it was this easy, everyone would be using RPC all over the place for many years already, and we'd have no HTTP and no train-load of other protocols.
People are using RPC over HTTP all the time. Most backend APIs I've seen are only vaguely RESTful [1] and everyone seems to stuff much of their critical functionality in JSON RPC APIs masquerading as POST endpoints.
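In practice that tends to look something like this (the endpoint and field names are made up):

// The de-facto RPC call hiding behind a "REST" POST endpoint.
const res = await fetch('/api/todos/markAllAsCompleted', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ listId: 42 }),
})
const { ok, error } = await res.json()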
The reason RPC is making a comeback is because of 1. SSR (e.g. rendering React components to HTML on the server-side), and 2. the increased practice of continuous deployment.
Increasingly, the frontend is deployed hand-in-hand with the backend.
RPC is a match made in heaven for stacks where backend and frontend are deployed in-sync.
There really are only two reasons for using a RESTful/GraphQL API: providing third parties access to your data (e.g. Facebook API) or decoupling frontend and backend (e.g. Netflix uses a loosely coupled microservice architecture).
I think this is neat having looked at the examples but isn't the title really just a game of semantics?
Non-rigorously, a web API is just the boundary of some service, which you communicate with over (commonly) HTTP at the endpoints specified at that boundary by developers. I could write my own library that wraps those endpoints with functions that make the calls to them; in fact, many libraries do just that.
What this takes care of, again kind of nicely, is dynamically and automatically generating that boilerplate as you write your API so you don't have to put in the work, but I think saying this isn't just a style of building an API is a little much.
Whether you're doing RPC, REST or whatever, the boundary between your service and another service is its API.
Remote functions instead of data interchange doesn't have the same ring.
Though I would say that RPC blurs a lot of lines. REST is clearly a way for two programs to communicate. RPC could be seen as a way for two programs to become one distributed program by means of location transparency.
Shared objects and other systems take that a lot further, RPC is in a middle ground that is very interesting but also a little scary. Reading this description I couldn't tell you exactly what the security implications are. With their TODO list app as written, can a malicious user execute arbitrary SQL (within the permissions assigned to their user) on the server? It wasn't immediately clear to me (I'm sure a little more effort on my part could clear it up).
It's cute that JS/web devs are finally having a revelation about RPC, but it's a disservice to say RPC hasn't been taken seriously yet. Plenty of industries have been using RPC for decades.
Our latest projects are built with RPC-style APIs and a BFF approach. We generate the TS client with Swagger and you effectively get strongly typed remote method calls.
So basically, we're at SOAP again. A few more years and maybe we'll come back to Remoting?
Using JSON-RPC 1.0 here, with ad hoc client and documentation generation. So yes, basically SOAP-like, without the XML and "enterprise Java" days cruft. Communication between the frontend team and backend team is easy; everyone knows what a function is. No fiddling with REST concepts for which everybody has their own personal interpretation. It's simple, it works.
About verbosity, I'm not sure what you mean, because it's as lightweight as any other JSON exchange. Of course there are other protocols based on binary messages, should you need that. But that's different.
Yea, I've seen a lot of folks doing this and it's neat. Most projects don't really need REST/GraphQL. So many waste so much time implementing a perfect GraphQL setup even though they don't need it :sweat_smile:.
I don't think "API" just means "programmatic interface". It also denotes some kind of decoupling. A library API => decoupling between lib user and lib author. GraphQL API => decoupling between backend and frontend.
The point of Telefunc is precisely to not decouple frontend and backend. Telefunc is all about bringing backend and frontend tightly together.
So, the tagline is very much on point ;-).
But, yea, you make me think that "Remote Functions. Instead of REST/GraphQL." is maybe better. I'll think about it.
It really is a game of semantics, as another commenter mentioned. It might feel like there's something distinct here, but all RPC was remote procedure calls, aka functions, and produced tightly coupled client and server code. And the term API is not specifically owned by HTTP calls or REST/GraphQL; it's a definition of an interface for consuming some service. Even in your case, whether you feel like you see it or not, every function is part of an API.
And if it has to be, I'd much rather have an Erlang / OTP-like approach where resilience and failure recovery are so fundamentally baked into the language that they're an actual feature instead of a liability.
Google has a system called plaque in which you can express DAGs of operators (functions), and it handles all the scheduling and data flow between nodes running on a distributed cluster. This achieves much better performance efficiency at large distributed scale while also providing program logic that is easy to reason about and debug. I would expect more such distributed programming models with dedicated language/compiler toolchains in the future.
Imagine a To-Do app, and you want to implement a new feature "mark all tasks as completed".
With a RESTful API you do these requests:
GET https://api.todo.com/task?completed=false
POST https://api.todo.com/task/42 { "completed": true }
POST https://api.todo.com/task/1337 { "completed": true }
POST https://api.todo.com/task/7 { "completed": true }
With RPC/Telefunc, this is what you do instead:
export async function onMarkAllAsCompleted() {
  await sql('UPDATE tasks SET completed = true WHERE completed = false')
}
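And on the frontend the call reads like a local async function (the file name is illustrative); Telefunc's transform turns the import into a network request:

import { onMarkAllAsCompleted } from './TodoList.telefunc.js'

await onMarkAllAsCompleted()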
The crux of the matter is that a RESTful/GraphQL API is a set of generic endpoints that are set in stone and designed to be able to serve all kinds of frontends, whereas with RPC you implement endpoints as you go, tailored to your frontend.
For example, if you want to provide third-parties access to your data, then RPC wouldn't make sense. You need an API for that.
On the other hand, if all you need is your frontend to access your database, then an API really is just an unnecessary indirection. (Although for large teams an API can be useful to decouple the frontend from the backend.)
There is no reason why you couldn't expose an endpoint that lets you mark a filtered query as completed. You can implement arbitrary DSLs over REST / GraphQL.
Philosophically, there is no difference, though each has its own connotations.
You expect RPCs to be much more integrated into a language, and an API to run over standard transport protocols using standard data formats. However, the two are isomorphic.
This is pretty neat. You write a function and then a library automatically creates wrapper libs to be called in a browser as if it’s a function call. A variety of frameworks have done similar - SOAP auto-generated Java libs come to mind - but this looks very clean and minimal.
Author of Telefunc here. I'm glad folks are finally taking RPC seriously.
RPC is back. Smarter, leaner, and stronger. (Sorry for the marketing jargon, but I really do believe that.)
Telefunc is still fairly young, you can expect a lot of polishing in the coming weeks/months. The high-level design is solid and I expect no breaking change on that end.
If you encounter any bug (or DX paper cut), please create a GitHub Issue and I'll fix it. I'm also the author of https://vite-plugin-ssr.com which also has no known bug, so I do take that claim seriously.
tRPC still relies on strings: no proper refactoring (i.e. F2 in VS Code), which makes refactoring as slow and error-prone as with REST and GraphQL. This should change with v10, but its API hasn't been finalized yet and the ETA is far into 2023, if at all. Also, the overall API design has missed opportunities.
Telefunc should support zod, why shouldn't it? Just pass them as native zod types and all good. You could also convert them before with z.infer but you don't need to.
> It seems like this library has its own bespoke syntax for types
Only for stacks which don't transpile server-side code. You can use normal TS types with something like Next, Nuxt, Svelte, Vite, etc. So, these bespoke types aren't relevant for the majority.
I believe this is only the case with routes, and they're still statically typed. E.g. if you only define `/getUser` and try to invoke `/gteUsre`, it will yell at you.
> Telefunc should support zod, why shouldn't it? Just pass them as native zod types and all good. You could also convert them before with z.infer but you don't need to.
Can you elaborate on this? AFAIK it only supports the types listed here https://telefunc.com/shield#all-types So my understanding is that you can't pass native zod types.
Zod allows for more complex validation like "input cannot exceed 256 chars", "number must be positive", or even arbitrary "refinements". I could, of course, simply write a function that will `Abort` instead, but tRPC+Zod handles that automatically - zero boilerplate. My current understanding is that I'll need to invoke Abort manually if the Zod.parse fails.
Importantly Zod philosophically follows "parse, don't validate" https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va... which means that once validation is done, you get a type - a parsed value. This means you can validate clientside and serverside using the same types. Again, sure you could do the same with Telefunc, but you still need to invoke Abort manually and every single RPC definition you have needs to be followed by Zod.parse. A small cost to be sure, but what are the advantages of Telefunc over tRPC? I'm not seeing any right now.
tRPC also isn't strongly tied to Zod, it also works with yup, superstruct, and custom validators.
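Concretely, the kind of schema I have in mind (the field names are made up):

import { z } from 'zod'

// Both constraints from above, plus "parse, don't validate": parse() either
// throws or hands back a value that already has the right type.
const CreateTodo = z.object({
  text: z.string().max(256),
  priority: z.number().positive(),
})
type CreateTodo = z.infer<typeof CreateTodo>

const input: CreateTodo = CreateTodo.parse({ text: 'buy milk', priority: 1 })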
I still don't see a concrete use case that would justify using Zod in favor of `shield()`. Both "input cannot exceed 256 chars" and "number must be positive" are one-liners in JavaScript.
I'm more than open to support Zod & co, but I'd like to see a concrete use case for it.
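For reference, this is roughly what those one-liner checks look like inside a telefunction (the telefunction name is illustrative; `Abort` as discussed elsewhere in this thread):

import { Abort } from 'telefunc'

export async function onCreateTodo(text: unknown, priority: unknown) {
  // Protect the telefunction from malformed/malicious input.
  if (typeof text !== 'string' || text.length > 256) throw Abort()
  if (typeof priority !== 'number' || priority <= 0) throw Abort()
  // ... persist the todo
}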
Fair enough. Still, this makes your original comment regarding "less boilerplate" kinda ironic... and I'm still not seeing any mention of advantages over tRPC. I'm more than willing to have my mind changed but you aren't giving me much material.
It is ironic because AFAICT Zod is actually more boilerplate than `shield()` :-). Although, the Zod familiarity is a clear argument in favor of Zod.
Speaking of boilerplate, one Telefunc feature I'm particularly excited about is its capability to automatically generate runtime type safety, so you don't even need `shield()` nor Zod then. But it's an experimental feature, let's see how it works out at scale.
The main problem with Telefunc right now is the lack of clear documentation which I'm going to work on in the next weeks/months. Hopefully, it will then become clear why Telefunc is simpler and less boilerplate. Stay tuned.
In the meantime feel free to check out the `examples/` and feel free to hit me up on Discord.
v10 is in beta and will go into RC fairly soon. I'd put my money on a 2022 final release. I'm already using it in production and it's rock solid. Only complaint is still having to restart the TS server in your editor occasionally due to all the inference magic it does.
Nice, I do something similar in a web framework I've been working on. All rendering happens on the server, so the callbacks have to run there too. The onclick handler in the DOM will trigger a POST to the server with a unique callback id.
def self.get_initial_state(initial_count: 0, **)
  { count: initial_count }
end

def handle_decrement(_)
  update do |state|
    { count: [0, state[:count] - 1].max }
  end
end

def handle_increment(_)
  update do |state|
    { count: state[:count] + 1 }
  end
end

def render
  <div class={styles.counter}>
    <button on-click={handler(:handle_decrement)} disabled={state[:count].zero?}>
      -
    </button>
    <span>{state[:count]}</span>
    <button on-click={handler(:handle_increment)}>
      +
    </button>
  </div>
end
Really odd how there are a dozen comments on RPC but no one mentioned gRPC for the web browser. Google goes in the reverse direction, where from the protobuf definition you autogenerate API boilerplate, client stubs, and even REST-over-HTTP APIs.
The frontend-side also throws an error. `throw Abort()` is about protecting telefunctions from third parties. It shouldn't be used to implement business logic (simply use `return` instead).
I actually have plans to make Telefunc also have sensible defaults for network failure. So that the user doesn't have to take care of that (while the user can customize failure handling if he needs to).
We have a hand-rolled solution that does this and it's easily my favourite part of our codebase. Ours is with a Java backend so a bit more complicated but not much.
If you have a tightly coupled frontend and backend and no need for third party consumers of your API, I highly recommend it. Makes moving between frontend and backend such a pleasant experience with a well typed and defined interface.
The thing about people with experience, we love to tell you about the past. The cycles are all the same, technology A is created, it has some level of success, then something else (B) is adopted. A new generation of developers have only experienced B and due to its shortcomings reinvent A as C. And the cycle repeats over and over.
This seems like a very nicely done implementation.
One negative: I don't think people realize how easy it will be to inadvertently include server side files in client side bundles.
The technical approach mitigates this and is well designed. But simply by blurring the lines, it might result in server-side config being bundled into the client.
(Easy to mitigate: just add a line to the server-side stuff that self-destructs if it detects the wrong environment.)
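For example (the env var name and error message are made up):

// Server-only module: blow up immediately if it ever ends up in a client bundle.
if (typeof window !== 'undefined') {
  throw new Error('server-only config was bundled into client-side code')
}

export const dbPassword = process.env.DB_PASSWORD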
The way people abuse HTTP JSON API design these days, I figure this would just cut out a lot of the cruft. Maybe it's not the best way to do some things, but it might be a lot simpler...
Well that is until someone writes a recursive function (or just a loop) which calls out to an RPC without consideration. But that's nothing new either....
How is `fetch(base + '/myFn')` all that different than just calling `rpc::myFn`?
The badness tends to not start in your example - which is the most trivial example of an RPC - but when you don't know / care if a function is really a stub for some kind of remote invocation. Before you know it you're knee deep in a sea of distributed systems problems where instead of a clean separation between client and server, half your client is on the local machine, the other half is on a remote machine, and the state is also weirdly split. Of course, you can replicate the same mistakes with a REST API, but at least that's by definition more explicit about state transfer etc.
The thinking around a similar (but more mature) project, tRPC is interesting around this. They're very clear about the fact that it's designed for tightly coupled typescript web app monorepos. I believe the creator even was talking about the name not being ideal, but it might be too late to change.
Aside from maybe being able to generate an OpenAPI spec (something that the tRPC community is working on), it's just going to fall apart outside of web apps.
But for those developers it's great for eliminating this weird step where we need to think about all our data going into a completely unstructured string (json) when going between our server and client. For most full stack JS/TS apps that's all you need.
So much code is an internal client function "GetUser(5)" configured to hit a specific URL "/users/5" and the server accepts requests on "/users/5" just to end up invoking a handler called "GetUser(5)".
Definitely a fan of completely skipping the URL "REST" ceremony.
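That is, something like this, where the same name has to be kept in sync by hand on both sides of the URL (the names are made up):

// client
export async function getUser(id: number) {
  const res = await fetch(`/users/${id}`)
  return res.json()
}

// server (Express-style route; GetUser is the handler that actually does the work)
// app.get('/users/:id', async (req, res) => res.json(await GetUser(Number(req.params.id))))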
We program things to make our jobs easier. If it looks like a call, takes args like a call and returns values like a call, it probably is a call. If you have nothing against a call, you have nothing against RPC. Give me TCP, SMTP-over-SOAP-over-HTTP, XML-over-PigeonPost, whatever. I’ll hide this nonsense in an rpc.<langname> file and turn it into a call, because calls are my main programming instrument. Everything else is just an annoying noise that either takes a page to perform or gets wrapped into the exact same parametrized call.
As always, the devil is in the details. SOAP didn't get complicated for absolutely no reason. As your application grows, you'll need to think about error / exception handling, backwards compatibility (new parameters), and versioning, to start.
These nuances are not unique to a protocol. The fact that some prior tech wasn’t smart enough to think about these doesn’t mean that someone will inevitably repeat it. “SOAP” is like an elevator anxiety trigger because one of them failed brutally in the past.
> As your application grows
No, you have to design it in from the beginning. How can someone not think of exception borders, API contracts, safety/idempotency and versioning even for a single call?
I see that errors are handled well by telefunc. Other issues are easy to do in the same way you do it in REST/POST, i.e. foo(…, new_arg), foo_v2(…), plus some docs. It is literally just a different form with the same content, which is closer to the language and doesn’t require pages of almost identical parametrized wrappers.
You’d be surprised how often people fail to consider things like versioning when building APIs. Yes, they absolutely should be considered from the beginning.
I’ve been burned in the past by RPC (mainly SOAP), but will take a deeper look and try to keep an open mind. Building “REST” APIs is damn tedious.
It's true, however, that RPC in its simplicity loses a lot of the niceties of HTTP and REST. You have no conventions to go off of for state control, no standard for error handling, etc.
Maybe you want something more barebones, maybe you want to leverage existing protocols. These days, it's kinda the wild west out there in terms of web standards it seems, so go nuts I guess.
I will admit that RPC does feel more natural for many applications. In the old days, I worked on an app that used XML-RPC and it was fine. If you can avoid the complex tooling and it works for you / your team, great!
That’s understandable! Are there specific constraints you are thinking of? I did a simple implementation of RPC for workers a few years ago, but it would be really nice to get the import transforms Telefunc handles for you.
If I recall correctly, doing this with pickle does at least replicate most if not all of the security issues that Java RMI has, so I guess it's most of the way there.