I don't know how many times I looked at the output of c-preprocessors and compilers to figure out what the heck was going on. One choice example of this was a pretty complex system that managed to call a top level routine from somewhere deep down in the stack if an error occurred (which promptly led to a stack overflow that would only very rarely trigger).
The 'nonblocking' here is just a symptom of a much larger problem: abstraction is a great way to build complex stuff out of simple parts, but it is also a great way to introduce all kinds of effects you weren't aiming for in the first place, and this particular one is easier than most to catch. You can find the same kind of problem at every level of a software system, all the way up to the top, where calling dosomethingcomplex() and, if it fails, calling dosomethingcomplex() again is the cure.
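In rough code, the pattern I mean looks something like this (a made-up sketch; dosomethingcomplex() just stands in for whatever the big composite operation is):

    /* Made-up sketch: retry the whole complex operation instead of
       finding out which simple part underneath actually failed. */
    int dosomethingcomplex(void);   /* hypothetical composite operation */

    void the_cure(void)
    {
        for (int attempt = 0; attempt < 3; attempt++) {
            if (dosomethingcomplex() == 0)
                return;             /* it worked this time, good enough */
            /* no diagnosis of why it failed: the abstraction hides that */
        }
        /* give up here, or worse, carry on as if it had worked */
    }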
Writing easy-to-understand code is a big part of solving this kind of problem. I've always tried (but probably never succeeded) to write code in the simplest way possible; as soon as I find myself reaching for something clever, I feel it is a mistake. Either circumstance (some idiot requirement, such as having to use the wrong tools for the job) or genuine need may occasionally justify transgressing that rule, but if you do it with any regularity at all (and without documenting the particular exception and the motivation for going outside the advised lines) you are almost certainly going to regret it. (Or your successor may one day decide to pay you a house-call with a blunt object...)
> abstraction is a great way to build complex stuff out of simple parts, but it is also a great way to introduce all kinds of effects you weren't aiming for in the first place, and this particular one is easier than most to catch.
This isn't a problem when abstractions don't leak. Polishing abstractions until they don't leak is super hard, though.
Ignoring all that - in order to abstract something, you have to either 1) make assumptions or 2) establish a method for the configuration of those assumptions.
We all (naively) want it just to be handled for us. But sometimes that doesn't work out. We are the ones who have to learn that; the Second Law of Thermodynamics (which is the lynchpin of the Two Generals Problem) is unlikely to change to accommodate our foolishness :)
As I understand you, "polishing abstractions until they don't leak" is equivalent to "doing the whole job, not just part of it." Economically, this is a pain point for the people we work for. It sounds expensive. The accounting for it is very difficult. "Can't you just make it work" is not unreasonable.
Enforcement is the entire point. A failed return from a recv() may be an application problem. It doesn't compress.
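Concretely, checking that return at the call site looks something like this (a rough sketch of my own; the return-code convention here is made up):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Rough sketch, my own convention: >0 bytes read, 0 peer closed,
       -1 "no data yet" on a nonblocking socket, -2 a real failure that
       the application has to decide about. */
    ssize_t read_some(int fd, void *buf, size_t len)
    {
        ssize_t n = recv(fd, buf, len, 0);
        if (n >= 0)
            return n;                            /* data, or 0 = orderly close */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return -1;                           /* not an error, just not yet */
        return -2;                               /* ECONNRESET and friends */
    }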
> ..software is a purely logical artifact
No. No, sir, it is not. There is no magical unicorn version of communications in which you can simply assume it all always gets there instantly and in order. We can get close - severely underutilized Ethernet & 802.11 spoil us - but nuh uh.
> Make no mistake, it is expensive.
And you wonder why they are like they are :) "you can't afford it, honey." :)
> No. No, sir, it is not. There is no magical unicorn version of communications in which you can simply assume it all always gets there instantly and in order. We can get close - severely underutilized Ethernet & 802.11 spoil us - but nuh uh.
That simply means you want an unimplementable abstraction. (Perfectly reliable sequential communication over a computer network.) Of course it doesn't make sense to want impossible things.
> And you wonder why they are like they are :) "you can't afford it, honey." :)
This brokenness can't be fixed at the level of business applications. Languages and standard libraries need to be fixed first.
I forget what the thing you just did is called, but you've managed to switch sides. :) I'm the one who said there is no unicorn version etc. ....
You can't fix that in a library. There is a sequence of escalation. Failures are formally checked-for and counters are incremented, alarms are sent, actions are taken...
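As a rough sketch of that sequence (the thresholds and the alarm/action hooks here are hypothetical, just to show the shape of it):

    void raise_alarm(int fd);        /* hypothetical hook: tell an operator */
    void drop_connection(int fd);    /* hypothetical hook: take corrective action */

    #define ALARM_THRESHOLD 3
    #define ABORT_THRESHOLD 10

    static unsigned recv_failures;

    void on_recv_failure(int fd)
    {
        recv_failures++;                         /* failures are counted */
        if (recv_failures == ALARM_THRESHOLD)
            raise_alarm(fd);                     /* alarms are sent */
        if (recv_failures >= ABORT_THRESHOLD)
            drop_connection(fd);                 /* actions are taken */
    }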
You may not be interested in the Second Law, but the Second Law is interested in you.
> I forget what the thing you just did is called, but you've managed to switch sides. :)
I didn't switch sides. I stand by my assertion that software is a purely logical artifact. The laws of thermodynamics have no bearing on whether redirecting the control flow to a far-away exception handler (or, even worse, undefined behavior) is a reasonable way to deal with unforeseen circumstances.
> I'm the one who said there is no unicorn version etc. ....
I'm not talking about unicorns, only about abstractions that don't leak. That being said, I'll admit that sometimes there are good reasons for using leaky abstractions. My favorite example of this is garbage collection. The abstraction is “you can always allocate memory and you don't need to bother deallocating it”. The second part is tight, because precise collectors guarantee objects will be reclaimed a bounded number of cycles after they become unused. But the first part is leaky, because the case “you've exhausted all memory” is uncovered. The reason why this isn't a problem in practice is that most programs don't come anywhere near exhausting all available memory, and, if it ever happens, ultimately the only possible fix is to add more RAM to the computer.
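The closest C analogue I can offer (my own illustration, not garbage collection itself) is the classic xmalloc wrapper: the promise “you can always allocate” has exactly one uncovered case, and when it hits there is rarely anything sensible to do locally:

    #include <stdio.h>
    #include <stdlib.h>

    /* My own illustration of the leak: the "allocation always succeeds"
       promise is kept by simply refusing to return when it can't be. */
    void *xmalloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL) {
            /* the uncovered case: memory is exhausted, and short of
               adding RAM there is no local fix, so give up loudly */
            fputs("out of memory\n", stderr);
            abort();
        }
        return p;
    }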
FWIW, I don't consider TCP a leaky abstraction, because it doesn't promise that actual communication will take place. It only promises that, if messages are received, they will be received in order by the user of the abstraction. That being said, most TCP implementations are leaky, as is pretty much anything written in C.
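For what it's worth, one of the classic places where hand-written C socket code leaks is in forgetting that send() (like recv()) operates on a byte stream and may only do part of the job; a sketch of my own, not anything from the article:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Sketch: keep sending until the whole buffer has been handed to the
       kernel, and report real failures instead of hiding them. */
    int send_all(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = send(fd, buf, len, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;           /* interrupted, try again */
                return -1;              /* real failure, caller decides */
            }
            buf += n;
            len -= (size_t)n;           /* partial write, keep going */
        }
        return 0;
    }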