Hacker News

Nope.

AT&T's implementation had this really stupid behavior, and they snuck it into POSIX because that meant they wouldn't have to change their existing code. No one else realized how stupid it was until it was too late.

The design is stupid. There is no excuse for it.



Right. This article by Jeremy Allison sheds some light on how this broken design ended up in POSIX.

https://www.samba.org/samba/news/articles/low_point/tale_two...

> The reason is historical and reflects a flaw in the POSIX standards process, in my opinion, one that hopefully won't be repeated in the future. I finally tracked down why this insane behavior was standardized by the POSIX committee by talking to long-time BSD hacker and POSIX standards committee member Kirk McKusick (he of the BSD daemon artwork). As he recalls, AT&T brought the current behavior to the standards committee as a proposal for byte-range locking, as this was how their current code implementation worked. The committee asked other ISVs if this was how locking should be done. The ISVs who cared about byte range locking were the large database vendors such as Oracle, Sybase and Informix (at the time). All of these companies did their own byte range locking within their own applications, none of them depended on or needed the underlying operating system to provide locking services for them. So their unanimous answer was "we don't care". In the absence of any strong negative feedback on a proposal, the committee added it "as-is", and took as the desired behavior the specifics of the first implementation, the brain-dead one from AT&T.


That's the first design-related documentation I've seen presented, so thank you.

However, it's merely an indictment of the POSIX standards process and sheds no light on why the AT&T implementation was the way it was in the first place.

I keep reading what, to me (essentially an outsider with no horse in this race), sounds like hyperbole at worst ("brain-dead", "really stupid", "no excuse") and, at best, arguments advocating the MIT Method (of "Worse is Better" fame).

It tends to raise the question, "If it's so horrible, why did anyone bother to implement it that way or use it once it was there?" If it were actually as broken as everyone makes it out to be, I'd expect it to have been worse than nothing and to have seen no use.

Apparently, the major db vendors didn't bother, at least at the time, but that could well be because there was no reliable [1] cross-platform option.

So, again, how about some actual, contemporaneous evidence of the original design process, for a fair, contextual critique?

[1] i.e. reliable or good enough, not reliable in the sense of implementation correctness covering all corner cases, as seems to be demanded by certain commenters and the MIT Method in general.


What would falsify your position? Is there any evidence that you would not reject with "but maybe it was good enough"?

If I presented a case where a worker was killed because of this locking mechanism, wouldn't you just say "but maybe it prevented the meltdown of a nuclear reactor, so it was a net positive vs. no locking at all, therefore, it was the right thing to do"?


I'm not sure you actually understand my position, which has nothing to do with which philosophy is "right" or "better", but rather with the fact that the one you seem to object to so strenuously on moral grounds both existed and was a valid engineering consideration/strategy at the time.

To falsify it, you would merely need documentation that such a philosophy was not a consideration in this design.

If it's falsified, then there may well be something new to be learned about design and mistakes to be avoided.

Otherwise, it's just another example of "Worse Is Better" and isn't worth the effort.


A hacky implementation can be simultaneously good enough for some users and also completely unacceptable for a standard.

It's not worse than nothing, sure. If you use it exactly as expected, with only one logical user of a file in each process, it does an acceptable job. But measured against what it's supposed to be, a generic locking mechanism, it's horrifically broken.
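To make the "one logical user of a file in each process" caveat concrete: because closing any descriptor drops every lock the process holds on that file, programs that do rely on fcntl locks typically funnel all opens through a per-process refcounting table, so no descriptor is closed while another part of the program still depends on its locks. A hedged sketch (lk_open/lk_close and the fixed-size table are inventions for illustration, not a real API):

```c
/* Sketch: refcounted descriptor table so that close() is deferred
 * until the last logical user of a file is done, sidestepping the
 * "any close drops all locks" semantics of POSIX fcntl locking. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define MAX_LK 16

static struct { char path[256]; int fd; int refs; } lk_tab[MAX_LK];

/* Return a shared descriptor for path, creating the file if needed. */
int lk_open(const char *path)
{
    for (int i = 0; i < MAX_LK; i++)
        if (lk_tab[i].refs > 0 && strcmp(lk_tab[i].path, path) == 0) {
            lk_tab[i].refs++;
            return lk_tab[i].fd;        /* reuse: never open a second fd */
        }
    for (int i = 0; i < MAX_LK; i++)
        if (lk_tab[i].refs == 0) {
            int fd = open(path, O_RDWR | O_CREAT, 0600);
            if (fd < 0)
                return -1;
            strncpy(lk_tab[i].path, path, sizeof lk_tab[i].path - 1);
            lk_tab[i].fd = fd;
            lk_tab[i].refs = 1;
            return fd;
        }
    return -1;                          /* table full */
}

/* Drop one reference; the fd (and thus any fcntl locks on it) only
 * goes away when the last user is done with the file. */
void lk_close(int fd)
{
    for (int i = 0; i < MAX_LK; i++)
        if (lk_tab[i].refs > 0 && lk_tab[i].fd == fd) {
            if (--lk_tab[i].refs == 0)
                close(fd);
            return;
        }
}
```

This is roughly the kind of bookkeeping that server software ends up layering on top of the primitive; the linked Samba article describes having to cope with exactly this semantics.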

So I'll summarize this way: It's completely okay that they wrote this code, as a "version 0.2". But there's no excuse for presenting it as finished code, with reasonable semantics. It's not hyperbole to say that.


> also completely unacceptable for a standard.

[...]

> It's completely okay that they wrote this code, as a "version 0.2". But there's no excuse for presenting it as finished code, with reasonable semantics. It's not hyperbole to say that.

Am I missing something, or are you just saying "Worse is Worse"?

The "excuse" is that this way (arguably, perhaps) governed the history of Unix development, well before there was even a standard.

What I've been attempting to get people (especially those with the most strenuous invective toward the design) to do is perform the thought experiment of placing themselves in the "shoes" of the designers, both by trying to imagine being in that past and, much harder, by actually believing in that philosophy.

I believe that will go much further toward increasing understanding and, to borrow from the HN guidelines, gratifying intellectual curiosity, than arguing against strawmen (or just non-existent proponents) of design goodness.


> Am I missing something, or are you just saying "Worse is Worse"?

No, I'm not. It's fine that they made that code, and were using it.

But different situations have different requirements.

I'm not objecting to the design work at all.

I'm objecting to the idea of calling it "ready to standardize". This is a presentation problem, not a development problem. It was half-baked, and shouldn't have been set in stone until it was fully baked.


> I'm objecting to the idea of calling it "ready to standardize".

Oh, indeed, that's a distinctly different topic from the one the rest of the thread has focused on. The upthread indictment of the process is quite on point.

It's also a wide topic with a wide variety of involved parties, worthy of its own thread, off of a blog post.


> both existed and was a valid engineering consideration/strategy at the time.

Please define what you mean by "valid".

> To falsify it, you would merely need documentation that such a philosophy was not a consideration in this design.

Well, then I probably don't care? I care whether it was a bad idea, not whether it was a bad idea coming from a bad philosophy or a bad idea standing on its own.


> Well, then I probably don't care? I care whether it was a bad idea, not whether it was a bad idea coming from a bad philosophy or a bad idea standing on its own.

If your goal is to prevent such a "bad idea" in the future, then not caring could easily work against you.

You'd end up having to expend energy challenging each bad idea on its own, possibly failing even to make inroads because there's a bad philosophy that makes it seem like a good idea (maybe even obviously so, rendering your challenge easily dismissed and a wasted effort).

Instead, if you focus your energy on challenging the bad philosophy, you both get to the heart of the matter right away, and you cover all the new bad ideas it enables all at once (and even ahead of time). You also won't have to start from scratch, as the philosophy, unlike every new bad idea, isn't unique, and there's likely plenty of literature out there already that you can use as ammunition.


Well, yeah, of course I care. But not for the determination of whether a bad idea is bad. This may be an instance of worse is better, but it is a bad idea regardless, and thus at best an additional piece of evidence against the philosophy.



