
Cool! Y’all, this is obviously an RPC interface, but the point is that `import { onNewTodo } from './CreateTodo.telefunc.js'` is not, as written, an RPC. That transform, with seamless type support et cetera, is what makes this interesting. (If you think it’s interesting.)
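For those who haven't tried it, roughly what that looks like per Telefunc's docs (the `db` helper below is illustrative, not part of Telefunc):

```ts
// CreateTodo.telefunc.ts (this file only ever runs on the server)
declare const db: { todos: { insert(t: { text: string }): Promise<void> } } // illustrative

export async function onNewTodo(text: string) {
  await db.todos.insert({ text })
}
```

```ts
// Client component: the import looks local, but the bundler transform
// swaps the implementation for an HTTP stub with the same type signature
import { onNewTodo } from './CreateTodo.telefunc.js'

await onNewTodo('Buy milk')
```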

I think it’s interesting; I experimented with a higher-magic version of this a couple of years ago. (It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions, which was really fun when it was working.

My experience was that for the simplest use cases, the ergonomics were unbeatable. Just drop a `@remote` tag into your TodoMVC frontend, and there’s your TodoMVC backend. But as soon as I had to look under the hood (for something as simple as caching behavior), the “just a function” interface was a distraction and I found myself wishing for a more conventional RPC interface.

(I think that might be why you tend to see this sort of magic more in the context of a full framework (such as Meteor), where all the typical use cases can be answered robustly. If you’re building your thing out of modules, cleverer abstractions usually mean more work overall to integrate everything and boring is better.)




Joel Spolsky had an ancient article titled "Three Wrong Ideas From Computer Science" (2000): https://www.joelonsoftware.com/2000/08/22/three-wrong-ideas-... Some parts of this don't hold up that well.

But it has a section titled "Network Transparency" that talks about when network transparency fails. And he calls out three things that make network calls fundamentally different:

1. Availability. It's possible that your network "function" will be impossible to call for some amount of time.

2. Latency. The round-trip latency of RPC calls is extremely high, which affects API design in major ways. Typically, you wind up wanting to perform multiple queries or multiple operations per round-trip.

3. Reliability. Network calls may fail in many more ways than local calls.

And these are all issues that have defeated many "transparent" RPC systems in the past. Once your application spreads across multiple machines that interact, you need to accept that you're building a distributed system. And "REST versus RPC" is relatively unimportant compared to the other issues you'll face.
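To make these concrete, here's a minimal sketch of what a "transparent" network "function" has to deal with anyway (the endpoint, timeout, and retry policy are illustrative, not from any particular framework):

```ts
// Sketch: one remote "function" call, with the network concerns made explicit
async function callRemote<T>(url: string, body: unknown, retries = 2): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    const ctrl = new AbortController()
    const timer = setTimeout(() => ctrl.abort(), 5_000) // latency: how long do we wait?
    try {
      const res = await fetch(url, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(body),
        signal: ctrl.signal,
      })
      if (!res.ok) throw new Error(`HTTP ${res.status}`) // reliability: many failure modes
      return (await res.json()) as T
    } catch (err) {
      if (attempt >= retries) throw err // availability: is this call even safe to retry?
    } finally {
      clearTimeout(timer)
    }
  }
}
```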


> he calls out three things that make network calls fundamentally different:

Joel Spolsky was not the originator of these ideas; their most famous form comes from about 1994, as written by L Peter Deutsch:

https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...


To JS developers' credit, writing modern JS in many ways already is like writing a distributed system.

Even if you stay fully client-side, you have bits of code that are running in different parts of the browser and you have to marshal data back and forth between them. Many web API functions are really RPC calls into a different browser process.

JS has primitives like channels, promises, and async/await to deal with those.

At least there might be ways to use those as building blocks for an RPC system that lets you handle the three problems better.
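For example, the classic pattern of turning `postMessage` into promise-returning calls (a minimal sketch; the worker file and message shape are made up, and libraries like Comlink package this up properly):

```ts
// Promise-based RPC over postMessage to a Web Worker (illustrative sketch)
const worker = new Worker('worker.js')
let nextId = 0
const pending = new Map<number, (value: unknown) => void>()

// The worker is assumed to answer each request with { id, result }
worker.onmessage = (e: MessageEvent<{ id: number; result: unknown }>) => {
  pending.get(e.data.id)?.(e.data.result)
  pending.delete(e.data.id)
}

function call(method: string, ...args: unknown[]): Promise<unknown> {
  const id = nextId++
  worker.postMessage({ id, method, args }) // structured clone marshals the args
  return new Promise((resolve) => pending.set(id, resolve))
}
```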


The client-server relationship is distributed but the intra-browser stuff is just async IPC. Writing robust clients against remote APIs requires considerations around retries, failover, idempotency and read/write consistency. It seems to me that distributed patterns in pure client code are thin on the ground and limited to things like CQRS and Sagas.


It's not just *like* one, it *is* a distributed system.

With PWAs, web apps can and do work offline or with limited network connectivity. There's a whole class of problems this is useful for.


Web RPC protocols are not trying to replace local function calls though. They're trying to replace remote API calls (REST, GraphQL etc.). And all remote protocols have to solve problems like availability, latency, and reliability. Those problems remain. This is trying to solve a different problem with different trade-offs.

The problem it solves is unnecessary protocol noise and the trade-off is tight coupling between frontend and backend code. I think it's obvious that the trade-off makes sense for small projects and MVPs: most changes require touching both ends and typically the team isn't differentiated between backend and frontend anyway.

But even with large projects that have to cater to many clients, a tightly-coupled RPC system can solve problems like overfetching and underfetching as a straightforward method for implementing the BFF pattern[1].

[1] https://medium.com/mobilepeople/backend-for-frontend-pattern...
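To make the BFF point concrete, a hypothetical tailored endpoint for a single dashboard view (all names made up): the server aggregates several queries and returns exactly the fields that one screen renders.

```ts
// Hypothetical data-access helpers, for illustration only
declare const db: {
  users: { find(id: string): Promise<{ name: string }> }
  orders: { recent(id: string, n: number): Promise<unknown[]> }
  notifications: { unread(id: string): Promise<unknown[]> }
}

// One tailored endpoint per view: no overfetching, no underfetching,
// and the aggregation round trips stay server-side
export async function onLoadDashboard(userId: string) {
  const [user, orders, notifications] = await Promise.all([
    db.users.find(userId),
    db.orders.recent(userId, 5),
    db.notifications.unread(userId),
  ])
  return { name: user.name, orders, unreadCount: notifications.length }
}
```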


Exactly.

Telefunc is about replacing generic endpoints (RESTful/GraphQL APIs) with tailored endpoints (RPC).

That's why the trade-off is simpler architecture (generic endpoints are often an unnecessary abstraction) vs. decoupling (tailored endpoints require the frontend and backend to be deployed in sync).

In the long term I foresee RPC also being used by very large teams. I have ideas around this, stay tuned. I can even see companies like Netflix ditching GraphQL for RPC, although this won't happen any time soon.

In the meantime, RPC is a natural fit for small/medium sized teams that want to ship today instead of spending weeks setting up GraphQL.


Every generation keeps trying RPC and learns its lesson… eventually.

On Windows it was DCOM, then COM+, then .NET Remoting, then WCF, then who knows, I lost track.

REST APIs are simple and easily debuggable; magic remote API layers are not.

That's why REST is still prevalent even though WebSockets had better performance characteristics (and in my testing they did have performance advantages). Yet 7 years after my testing, how many sites are running WebSockets?


For lots of APIs this is somewhat true. However, I recently took a deep dive into "REST" and realized that for many APIs, you really have to contort how you think in order to make it fit into that model.

It generally works for retrieving data (with caveats...), but I found that when modifying data on the server, with restricted transformations that don't necessarily map 1-1 with the data on the server, it feels like forcing a square peg into a round hole. I tend to think of verbs in that case, which starts to look like RPC.

("REST" is in quotes because the REST model proposed by Fielding (ie, with HATEOAS) looks almost nothing like "REST" in practice).


If Telefunc is bug-free (which I intend to be the case), then you won't need to debug Telefunc itself.

For error tracking, Telefunc has hooks for that.

For debugging user-land, you can `console.log()` just like you would with normal functions.

Do you see a concrete use case where this wouldn't suffice?


> If Telefunc is bug-free (which I intend to be the case), then you won't need to debug Telefunc itself.

And if no one crashes, you don't need airbags.

If I'm being blunt, reality doesn't give a shit what you think. It's better to design with the assumption that there are bugs so that _WHEN_ it happens, the users aren't up a creek without a paddle.

These sorts of implicit assumptions are how painful software is made.


Software can be verified to be correct; the stupidity of others is unavoidable.


Did someone solve the halting problem while I wasn't looking?


No, but somebody created type checking, linting and testing.

Not sure what's up with the CS people and their halting problem. In the industry we've solved (as in developed ways to deal with) the problem of verification decades ago.

Also, debuggers. Nobody said the verification can't be done by a human.


Verifying software is correct implies solving the halting problem.

What you mean is "no known bugs", so maybe use those words instead. "Verification of correctness" has a specific meaning in our industry.

yeah yeah, I get it, those stupid CS people and their P=NP talk. Don't they know you can obviously verify correctness without verifying it for all possible inputs? What next, you can't prove a negative such as absence of bugs?!?


> verifying software is correct implies solving the halting problem.

No, producing a program that can verify that all correct programs are correct implies solving the halting problem.

Verifying a particular piece of software is correct just implies you've proved that one piece correct. (And probably wasted your time dicking around with it only to find that the actual issue was in software you treated as 'outside' of the software you were verifying...)


What you're describing is the programming version of approximation. It's understood that it has a margin of error due to the approximation.

What you're claiming here is that if you check enough inputs you've proven it correct, and what I'm telling you is that's not the case.

The fact is, nothing has been proven; the wording on the webpage itself is more honest (no known bugs and a suite of automated tests).

To verify a program is correct is a much stronger claim than what is even on the website. And that requires restricting the input or solving the halting problem.


Generic endpoints are a design smell: junior devs creating junior dev problems because someone who's been coding for 10 minutes wrote a series on API design on Medium.

See the backend-for-frontend pattern.

Debuggability wins.


Interestingly there was some research around the latency issue, trying to automatically rewrite the client code so that remote calls are batched together. (This was done in Java where rewriting the .class file was tractable): https://www.doc.ic.ac.uk/~phjk/Publications/CommunicationRes...


Availability and reliability can be handled similarly by thinking about classes of failure modes. Most RPC frameworks have error codes that are too fine-grained (see HTTP and its cousin REST).

Latency is actually mostly solvable with something like Cap'n Proto, which pipelines promises so that you can have normal-looking composable functions without the latency cost.
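Roughly, the idea is this (a conceptual sketch only; these are not real Cap'n Proto APIs):

```ts
// Hypothetical pipelined-RPC client, for illustration only
declare const api: {
  getUser(id: number): { getOrders(): Promise<unknown[]> }
}

const user = api.getUser(42)    // no await: returns a remote stub immediately
const orders = user.getOrders() // sent on the wire referencing the pending result
console.log(await orders)       // both calls complete in a single round trip
```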

Joel Spolsky's advice is usually fine, but it's important to understand that his commentary tends to be about what ideas are in vogue/dominant at the time of the post and tends to have a more limited perspective (vs. listening to researchers in the field actively investigating the problems and investing in ways to find solutions).

For example, we have an explicit class that manages RPC calls so that at the call site it looks like a regular function. But since it's I/O it returns a promise, which makes it clear that it's I/O (as it has to be in non-Node.js JavaScript). Additionally, it returns a promise of a Result object type just like any other failable call, which you have to handle properly. Works pretty well (although in retrospect I'm now wishing we had used structured clone instead of JSON to send messages around).
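The call-site shape is something like this (a minimal sketch of the pattern described, not the actual code; all names are illustrative):

```ts
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E }

// An explicit class manages the RPC, so call sites read like regular
// async functions but failures surface as values, not exceptions
class TodoClient {
  constructor(private baseUrl: string) {}

  async createTodo(text: string): Promise<Result<{ id: string }, Error>> {
    try {
      const res = await fetch(`${this.baseUrl}/todos`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ text }),
      })
      if (!res.ok) return { ok: false, error: new Error(`HTTP ${res.status}`) }
      return { ok: true, value: (await res.json()) as { id: string } }
    } catch (err) {
      return { ok: false, error: err as Error }
    }
  }
}
```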


Sorry, but this post just shows a lack of experience in dealing with distributed systems. Joel was bang on the money about the problems of disguising network interactions in applications.

Async vs. sync is not enough. Network partitions have multiple failure modes depending on a range of factors, but one of the most pernicious is the "undetectable cloud break", an invisible break between two routers. Having a well-defined "API" that clearly marks "we are communicating over the network here" makes it much easier to reason about and handle these issues.

There is an enormous difference between “this I/O call failed” and “this I/O call took 10 minutes to fail”.


vlovich123 built Cloudflare's R2, I think he has some experience with distributed systems.

The RPC debate is confused because the two sides of the debate have different ideas of what RPC is. The anti-RPC crowd is arguing about RPC systems from the 80s, which tried to make RPC look exactly like a local function call, including blocking until the call was complete. The pro-RPC crowd is mostly speaking from experience using modern RPC systems which do not do this; they use asynchronous callback APIs, have some mechanism for reporting network errors, etc. Modern RPC systems are called RPC because they encourage you to organize your network API conceptually into something like function calls, but they certainly don't try to hide the fact that there is a network here that can fail in all the ways networks fail.


I don't see why automatically converting functions to RPC APIs is incompatible with a clear "we are communicating over the network". Just put all the RPC functions in their own `network` namespace or whatever.
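Something like this (the module path is made up):

```ts
// Import the generated stubs under a namespace so every
// call site is unmistakably a network call
import * as network from './todos.telefunc.js'

await network.onNewTodo('Buy milk')
```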


One counterpoint to that is that this works very well in Erlang (and Elixir) and has done so for 30+ years. Well, I might be cheating a little bit with my definition of "so" :)

I'll agree with you that it's not a trivial change, far from it. You need to switch your mindset from local to remote, from procedures to agent messages, from exceptions to non-local error handling, but it remains quite simple and readable.

I feel that Telefunc is bringing developers a good chunk of the way towards that goal. And, who knows, maybe there is another simple model that is not exactly agents and that would also work nicely.


> Most RPC frameworks have error codes that are too fine-grained (see HTTP and its cousin REST).

Are HTTP and its cousin REST examples of RPC frameworks? I don’t think so, and I may have misunderstood your point. But if you do consider them to be RPC, can you explain your reasoning?

Despite drawbacks here and there, I do have appreciation for RPC, e.g. JSON-RPC is widely used in the distributed/decentralized tech space I work in. It’s fairly well-specified and well-suited for a variety of use cases:

https://www.jsonrpc.org/specification
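For reference, the wire format is about as small as it gets. The envelope fields below are from the spec; the method name and params are examples:

```ts
// A JSON-RPC 2.0 request and its matching response
const request = {
  jsonrpc: '2.0',
  method: 'todo.create',
  params: { text: 'Buy milk' },
  id: 1,
}
const response = {
  jsonrpc: '2.0',
  result: { id: 'abc123' },
  id: 1, // echoes the request id so callers can correlate replies
}
```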


Not the GP, but I would definitely consider REST a layer in an RPC framework.


> I would definitely consider REST a layer in an RPC framework.

A beef patty is not a hamburger without the bun, pickles and condiments.


The distinction that it has to look like a normal function call is unhelpful. To me RPC and IPC are synonymous (which I get isn’t the Wikipedia definition). I lump them all together as “distributed communication”. The actual problems I think about are:

* semantics of what the request / response looks like and how failures communicate

* how are the APIs composed? Not the local function call, but the functionality calling the API implies (see Cap'n Proto's page on composable remote APIs - it's really brilliant).

* can you communicate across programming languages?

* how is ownership of remote resources managed?

* how is routing / dispatch of API calls managed?

For example, `fetch` in JS land looks a hell of a lot like a normal function call to me. And I certainly then wrapped it with easier-to-read APIs that looked more normal and handled routing and error handling for me. The main difference with HTTP is that RPC systems traditionally autogenerate a bunch of code for you. But in HTTP land you now have efforts like OpenAPI to do similar things. So HTTP isn't RPC, but the autogenerated code from a description of the API makes it RPC? That gets you into the ship of Theseus paradox, which isn't helpful. At what amount of wrapping with local function calls do distributed API calls transform into RPC?

To me it turns out that async/sync is a key distinction and the original sin of RPC in the 80s. It's also why I view the coloring problem as a good thing (at least given where the tech stack is today). Making functions that may be doing I/O stand out and need thought is a good thing. The problem with many RPC/IPC mechanisms from the 80s is that they tried to make the code seem synchronous, not that it looks like a function call. But I haven't seen any RPC systems where async vs. sync is the distinguishing characteristic (e.g. Cap'n Proto and fetch are both async).

To me, HTTP, REST, SOAP, COM, gRPC, cap’n’proto, JSON-RPC, XPC, Mach etc all try to solve the same problem of distributed communication and processing, just by making different trade offs.

HTTP tried to standardize RPC semantics, but you can clearly see it's optimized particularly around static document retrieval and typical browser tasks. I'll buy that static document retrieval need not be classified as a remote API, but there's a lot of non-static document retrieval layered on the same semantics that is proper API calls; notably, REST tries to get everything to conform to that so that middleware has some convention and you get browser interop. It works surprisingly well because a good majority of apps are CRUD and don't need anything more.

JSON-RPC tries to make routing/dispatch more automated, if I recall correctly, because REST is such a cluster (look at all the routing libraries, which to me are an anti-pattern). Plus, JSON-RPC is particularly easy in JS, which is usually the language on one side of the transaction.

COM solves all of the above and things like distributed ownership of resources, except it historically didn’t have a good security model (I don’t know it well enough to know what’s improved there but I don’t assume it’s stayed static since the 90s even if it hasn’t found traction outside Microsoft).

They all try to solve different parts of the problem or have different sets of tradeoffs, but to me it’s fundamentally the same problem. It’s the same reason I don’t view distributed systems as something that requires different machines, CPUs, processes or anything about geography. You can have distributed systems within one process, you can have it within processes on the same CPU, you can have it between CPUs on the same device, etc. Heck, to me even the kernel and user space form a distributed system with syscalls as a primitive RPC mechanism. JNI is a form of RPC too although you’ve seen traditional systems like gRPC supplant it even within process because it moves complexity out of the JNI layer into a simpler system with easier to follow rules.

Now you can sometimes make simplifying assumptions to reduce complexity, reduce power usage, etc., and not all distributed systems necessarily have the same sets of problems. But fundamentally to me it's all the same problem space, which is why I don't distinguish the terminology so much.


Yes, and that's why in Telefunc's world remote functions are called "telefunctions" to differentiate them from normal functions.

Telefunc doesn't try to completely abstract away the network: as you say, it's not possible to abstract away network latency and network failure. Telefunc is about creating network endpoints as easily as creating functions.


And you can't just figure out a solution for these things, as there is no one solution. There are many different tradeoffs needed based on the call.


> My experience was that for the simplest use cases, the ergonomics were unbeatable.

Exactly.

> the “just a function” interface was a distraction and I found myself wishing for a more conventional RPC interface.

For real-time use cases, I agree. I believe functions are the wrong abstraction for real-time. I do plan to support real-time, but using an "it's just variables" abstraction instead, see https://github.com/brillout/telefunc/issues/36.

> cleverer abstractions usually mean more work overall to integrate everything and boring is better.

Yes, Telefunc's downside is that it requires a transformer. The Vite and Webpack ones are reliable and used in production (using `es-module-lexer` under the hood). The hope here is that, as Telefunc's popularity grows, more and more stacks are reliably supported. Also, there is a prototype that uses Telefunc without a transformer at all, which works but needs polishing.
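For the curious, conceptually the client-side output of the transform is roughly this (a sketch, not Telefunc's actual generated code; `/_telefunc` is assumed here as the default HTTP endpoint):

```ts
// What the client bundle sees after the transform: the server
// implementation is stripped and each export becomes an HTTP stub
export async function onNewTodo(...args: unknown[]) {
  const res = await fetch('/_telefunc', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ name: 'onNewTodo', args }),
  })
  return res.json()
}
```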

> I experimented with a higher-magic version of this a couple of years ago.

Neat, curious, is it on GitHub?


> I think it’s interesting; I experimented with a higher-magic version of this a couple of years ago. (It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions, which was really fun when it was working.

That sounds familiar! We were doing all of that for Opalang around 2009-2010, if I recall my calendar correctly. Were you there?


> It had continuations to support returning callbacks, could preserve object identity, and could lift inline anonymous functions to the server with some amount of intelligence.) My goal was to make the RPCs as close as possible to local async functions

Are you me? I made this as a fun project a while ago, and left it there gathering dust, but earlier this year it found new life in an AI application and it's delivering promising results.



