Every generation keeps trying RPC and learns its lesson… eventually.
On Windows it was DCOM, then COM+, then .NET Remoting, then WCF, then who knows; I lost track.
REST APIs are simple and easily debuggable; magic remote API layers are not.
That's why REST is still prevalent even if WebSockets had better performance characteristics (and in my testing they did have performance advantages). Yet seven years after my testing, how many sites are running WebSockets?
For lots of APIs this is somewhat true. However, I recently took a deep dive into "REST" and realized that for many APIs, you really have to contort how you think in order to make it fit into that model.
It generally works for retrieving data (with caveats...), but I found that when modifying data on the server, with restricted transformations that don't necessarily map 1-1 with the data on the server, it feels like forcing a square peg into a round hole. I tend to think of verbs in that case, which starts to look like RPC.
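A minimal sketch of that square-peg feeling (every name here is invented for illustration): an order whose status may only move along restricted transitions. A REST-style handler has to reverse-engineer the intended transition from the submitted representation, while an RPC-style verb states the operation directly:

```python
# Legal status transitions only; anything else is rejected by the server.
ALLOWED = {
    "open": {"paid", "cancelled"},
    "paid": {"shipped", "refunded"},
}

def rest_update(order: dict, new_state: dict) -> dict:
    # REST-style: the client sends the desired representation; the server
    # must infer which transition was intended and check it is legal.
    target = new_state["status"]
    if target not in ALLOWED.get(order["status"], set()):
        raise ValueError(f"illegal transition {order['status']} -> {target}")
    return {**order, **new_state}

def rpc_pay(order: dict) -> dict:
    # RPC-style: the verb *is* the operation, so the restricted
    # transformation is explicit instead of inferred from a diff.
    if order["status"] != "open":
        raise ValueError("only open orders can be paid")
    return {**order, "status": "paid"}

order = {"id": 42, "status": "open"}
assert rpc_pay(order)["status"] == "paid"
assert rest_update(order, {"status": "paid"})["status"] == "paid"
```

The data-retrieval side maps cleanly onto resources either way; it's the write side, where only certain transformations are allowed, that pushes you toward verbs.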
("REST" is in quotes because the REST model Fielding proposed (i.e., with HATEOAS) looks almost nothing like "REST" in practice.)
> If Telefunc is bug-free (which I intend to be the case), then you won't need to debug Telefunc itself.
And if no one crashes, you don't need airbags.
If I'm being blunt, reality doesn't give a shit what you intend. It's better to design with the assumption that there are bugs, so that _WHEN_ one happens, the users aren't up a creek without a paddle.
These sorts of implicit assumptions are how painful software is made.
No, but somebody created type checking, linting and testing.
Not sure what's up with the CS people and their halting problem. In industry we solved (as in, developed ways to deal with) the problem of verification decades ago.
Also, debuggers. Nobody said the verification can't be done by a human.
verifying software is correct implies solving the halting problem.
What you mean is "no known bugs", so maybe use those words instead. "Verification of correctness" has a specific meaning in our industry.
yeah yeah, I get it, those stupid CS people and their P=NP talk. Don't they know you can obviously verify correctness without verifying it for all possible inputs? What next, you can't prove a negative such as absence of bugs?!?
> verifying software is correct implies solving the halting problem.
No, producing a program that can verify that all correct programs are correct implies solving the halting problem.
Verifying a particular piece of software is correct just implies you've proved that one piece correct. (And probably wasted your time dicking around with it only to find that the actual issue was in software you treated as 'outside' of the software you were verifying...)
What you're describing is the programming version of approximation. It's understood that it has a margin of error due to the approximation.
What you're claiming here is that if you check enough inputs you've proven it correct, and what I'm telling you is that's not the case.
The fact is, nothing has been proven; the wording on the webpage itself is more honest ("no known bugs" and a suite of automated tests).
To verify a program is correct is a much stronger claim than what is even on the website. And that requires restricting the input or solving the halting problem.
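To make the distinction concrete, a toy sketch (the function and its planted bug are invented): passing a large sample of inputs is not a proof, while exhaustively checking a restricted, finite domain really is one, but only for that domain:

```python
def buggy_double(n: int) -> int:
    # Wrong on exactly one input; random or sampled testing is
    # unlikely to ever hit it.
    if n == 1_000_003:
        return 0
    return 2 * n

# "Enough inputs" pass, yet the function is not correct:
assert all(buggy_double(n) == 2 * n for n in range(10_000))

# An exhaustive check over a restricted domain does constitute a proof
# for that domain, and here it exposes the bug:
assert not all(buggy_double(n) == 2 * n for n in range(2_000_000))
```

That's the "restricting the input" half of the claim; without a finite (or otherwise tractable) input domain, a test suite can only ever report "no known bugs".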