The first time I read Anathem, it was certainly slow getting through the first 200 pages, because I kept referencing the glossary to learn the vocabulary. The 2nd and 3rd times I read it, the entire book flowed much more smoothly, and even the beginning was quite enjoyable.
It's not just a linkbait title, it's a linkbait article. It's called "burying the lede"[1].
Here's what a well written lead paragraph should have:
Most standard news ledes include brief answers
to the questions of who, what, why, when, where,
and how the key event in the story took place.
Instead, here's what we got as the first paragraph of this article:
Somewhere in the high desert of eastern Nevada,
a few turns off Route 50 — "the loneliest road
in America" — a station wagon sat parked by the
side of the highway. Before it lounged a young
couple on red lawn chairs. A crudely painted
wooden sign on the vehicle's roof advertised:
"Snow Globes $20."
I skimmed the story to find out exactly why the tree was cut down, but that information was elusive. It might be somewhere in the article, but I have no desire to read the LA Times edition of War and Peace.
There's a limited amount of space in the Y Combinator title thingie. The LA Times provided a convenient title. Did I really do something improper here?
This is a fabulous story. And I happen to know about it because I took the trip to that memorial and was very excited to see it make the front page of the LA Times. I can't believe I'm talking with you guys about linkbait rather than this ancient tree that was cut down by an NSF funded expedition.
I went back and read both the LA Times article and the Wiki entry and, you're right, it is a fabulous story. The two articles are complementary, LA Times has the human interest story, Wiki has the dry facts.
Anyone who is interested in this topic shouldn't be deterred by this squabble over titles.
BTW to answer the title, the tree was cut down 50 years ago because a grad student who was "studying the climate dynamics of the Little Ice Age" asked to have it cut down. Because in retrospect this act became very controversial, the exact reason(s) might have been retconned. One general reason was to study the core rings.
Some people think it was done knowingly, to make a name for Curry, the grad student who made the request. Others say that because the Swiss researcher who was trying to take a core sample (which is non-destructive) was unable to do so before heading back to Switzerland, Curry genuinely wanted to do this for science. At some point I found a link where some students of his defended him and said the event had a profound impact on Curry for the rest of his life.
The thing is, they wanted to study the rings because they thought it was really old.
First of all, the LA Times changed the title from when I originally posted it. The first title they used was stupid quite frankly.
Second, under "sharelines", there are two headlines they suggest using when sharing the story. I picked one. I can't believe you guys are seriously having a problem with it. It describes the story well.
Third, from what I heard when first getting into this story, it WAS the oldest known living organism on Earth. Since then they have found older "things", and, having found the Prometheus tree, they have found older trees as well by looking in the same area and in areas that have the same general living conditions.
Fourth, I'm sharing a link, not writing a freaking thesis. I'm glad at least someone on this thread bothered to find out about this story because it's fascinating.
Fifth, for those who are more interested in the story than semantics about titles, there is a hero, Mike Drakulich, from the Parks Service who refused to cut down the tree. He even took his chainsaw with him because he didn't want them to cut it down. They found another Park Ranger to do it the next day. I met Mike's daughter at the memorial. It was quite touching. Here's another version of that story. (Not all accounts say the exact same thing btw): http://www.nashuatelegraph.com/news/816442-196/daily-twip---...
Yes, but since then, there has been a continual stream of updates, unit tests (shockingly, the original version had ZERO unit tests), expanded support for a variety of IDEs, and better code structure.
Also, the author finally got around to actually providing a proper pom.xml file.
Check the commit history for details.
If you're interested in contributing, there's definitely room for expanding the unit test coverage, and also for instrumenting the build process to support a code coverage tool such as EMMA or Cobertura. Also, I don't believe that the code has undergone a security audit, and I don't see any code reviews for many of the commits, which is distressing.
And documentation! The code does not support javadoc, and in fact, there are only two comments in the entirety of the nearly 1500 line code base!
> And documentation! The code does not support javadoc, and in fact, there are only two comments in the entirety of the nearly 1500 line code base!
This is just _so_ non-compliant with Best Practices that it reveals itself too easily as a parody.
Each class should at least have the default Eclipse comment telling you how to change the template for a new class file, and most methods should have auto-generated JavaDoc for the first version of the method which was written years ago, with nothing filled in.
I'm not sure how that would help. They would have to generate a matching hash on their end, giving them a lookup table to work backwards from hash to email address.
Now if they wanted to supply a list of hashes to the public, then you could check your own without knowing any of the other addresses used to generate the remaining hashes.
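To make that concrete (a hypothetical Python sketch, not any particular vendor's scheme, assuming a plain unsalted SHA-256): because plausible email addresses are easy to enumerate, anyone can precompute the same hashes over a list of known addresses and invert any published hash with a simple dictionary.

    import hashlib

    def email_hash(addr: str) -> str:
        # Assumed unsalted hash of the normalized address
        return hashlib.sha256(addr.strip().lower().encode()).hexdigest()

    # The lookup table: hash every address you already know about...
    known = ["alice@example.com", "bob@example.com"]  # hypothetical addresses
    table = {email_hash(a): a for a in known}

    # ...then any hash you're handed maps straight back to an address.
    print(table.get(email_hash("Alice@Example.com")))  # alice@example.com

Note the normalization step: both sides have to hash the address the same way for lookups to work at all, and that same determinism is exactly what makes the reverse lookup table possible.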
Yes, but they would already have your e-mail address anyway. Lookup by hash precludes the case where you're giving them information they didn't already have.
True. I was more referring to it being a confirmation that this is an email address that anyone cares about.
If I wanted to be truly malicious I'd have my online checker return a "Nope, you're all good" and then add that email address to the short list of accounts to go after.
Yes, the title is terrible and the fact that the "meat" of the information is inside a video is worse (no abstract or summary in the linked page).
However, what's shown in the video is amazing: doctors ran a trial in which they injected a form of HIV (modified so that it doesn't cause AIDS) into a child in order to cure the child's leukaemia.
> The OpenBSD project uses a lot of electricity for running the
> development and build machines. A number of logistical reasons
> prevents us from moving the machines to another location which might
> offer space/power for free, so let's not allow the conversation to go
> that way.
I don't understand this comment. If the choice came down to moving versus shutting down entirely, why is moving an unacceptable answer?
This discussion comes up every time only because some people seem to think OS development is like racking new x86 servers running RHEL.
Many of the machines do not have LOM. They have hardware failures instead. They hang because they get trashed building OpenBSD and ports pretty much 24/7. There is debugging going on over serial cables. Someone needs to push that NMI button and check the LEDs flicker like they should. Reboot them. Constantly update to the latest development version, making them panic quite a bit. Diagnose that. The installation procedure requires console access, monitor adapters, weird keyboards, ... They don't fit in racks properly. There are security concerns. Etc, etc.
It's wrong to think of the machine room as rack space that can be had for cheap somewhere else. It's much more like a lab (with the mad professor living on top, controlling the experiment).
While what you say is correct, Theo's stance on this is still a bit unreasonable. A review should be done to see which systems can be moved or supported by means of remote power-off strips and IP console servers. They should be perfectly willing to move that gear if someone offers them the space. All the Sun SPARC, Alpha and Intel gear most likely falls into this category. Only systems that someone needs to be physically present to access should be left onsite.
I have donated to OpenBSD a number of times because I believe the project is of great value. In all cases where I used a release (for firewalls mostly) I purchased a CD set.
OpenBSD supports a number of odd and unusual platforms and does builds on them. See http://www.openbsd.org/plat.html. Older hardware can both use a significant amount of electricity and require much more hand-holding than is possible. Virtualization and emulation are not acceptable substitutes because they claim that doing builds on e.g. VAX is one of the best ways to ensure that the code works on VAX as opposed to simply booting on VAX. They also regularly find bugs affecting all platforms that are exacerbated by one particular architecture (think alignment or endianness issues).
> On a regular basis, we find real and serious bugs which affect all
> platforms, but they are incidentally made visible on one of the
> platforms we run, following that they are fixed. It is a harsh
> reality which static and dynamic analysis tools have not yet resolved.
We used to maintain SGI boxes for this same reason but we've pulled back. The main benefit we got from older RISC stuff was that they bitched about unaligned loads. If you are going to support any of those models you need at least one box like that in your cluster. SPARC was faster than MIPS so we kept SPARC.
I think Theo could probably thin down the cluster and still be good but maybe I'm wrong and he'll show up with examples that require all of that hardware. That would be interesting because in our case we've sorted out the problems and rarely see things blow up on the RISC boxen.
I was trying to think of a good way to say exactly this, but he says it better (and more authoritatively) than I ever could. Broad platform support is a significant factor contributing to overall quality in products like *BSD, and it's apparent in looking at the source.
As a longtime OpenBSD fan and advocate, this has always fascinated me. I loved SGIs back in the day but they are slow as shit today and unusable for any kind of modern desktop usage unless all you do is write code in a terminal. These platforms survive in OpenBSD land because somebody still cares enough about them to enjoy hacking on them. There's no point in saying "Drop them!" because the devs working on them probably couldn't care less what the rest of us think.
Personally, I do wish OpenBSD could somehow regain the popularity it once had and that support for modern hardware like 10GBE and scaling PF throughput w/ multi-core CPUs would improve. I don't know what it would take to bring people back.
Why should everyone else pay the 20K to subsidize ancient hardware support? If there really is a subset of people who really really depend on this, then they should be forking up the cash to pay for something that could be had for free elsewhere if support was restricted to 95% of the platforms in use by the vast majority of people.
If there is an argument that maintaining this will somehow improve security overall and not just on ancient hardware, then I would love to see it. But if the devs working on it couldn't care less what the rest of us think, then maybe those devs should pay their own electricity bills to support their toy platforms, because I couldn't care less what they think either ...
That is the argument. Obscure kernel and driver bugs are frequently only made apparent as edge cases on said ancient weird hardware, but the fixes benefit all platforms from a code-correctness point-of-view.
But many of those old hardware platforms can be emulated. So if their reason for existence is only triggering edge cases, there are other ways to do so.
Actually, most can't be emulated because emulators don't exist, and those that can typically can't be emulated correctly (SPARC emulators, for example, are notoriously sparse and of poor quality).
Some of the architectures also have different endianness and incredibly complicated peripherals compared to the cost-effective host machines, meaning that it's actually more power-efficient to run native. A headless 100MHz VAXstation, for example, draws less power than the equivalent host that would be required to provide a full, accurate emulation with peripherals. These aren't arcade machines.
OTOH, the preservation of such historically relevant architectures would benefit enormously from emulation. This is an aspect that should get some attention and which could, possibly, open up another funding avenue to the project as a side effect.
Not really. The OS syscall interface, the ABI and the fact that everything is abstracted via your C compiler normalise the differences between the machines pretty well, meaning you only end up dealing with portability issues.
Portability issues are where real hardware benefits. It's where you have battles with unusual register sizes, endianness, host/network byte-order differences, different memory models and memory protection, different performance characteristics, different timings and different exploits.
Unless the emulation is 100% accurate, including timing, which is a really difficult thing to do (look at the effort MAME goes to), the benefits over real hardware are moot.
Emulators are also expensive to write due to the above, have their own bugs and don't always recreate the bugs in the real hardware (which are sometimes exploitable).
I was thinking about MAME (more specifically, about MESS). If two groups benefit from a single effort, it seems to be a good investment, even if it costs nearly twice as much.
Also, using emulated hardware could cut down on the usage of the real pieces, which could then be better studied and preserved. Doing fewer builds on vintage hardware is, actually, a good idea.
Emulation isn't guaranteed to manifest the same edge case issues that surface these defects, though.
Then again, it's possible that emulation could surface other edge case issues. That's completely orthogonal to the value of non-emulated archaic architectures for this purpose, however.
Is that so clear? It's not as if, when SGIs are taken away, developers' efforts will seamlessly and efficiently shift to amd64. I like to think that "oddities" like SGI are data points to test against, and help keep abstraction alive by disallowing traps like pretending that everything is an x86.
I'd be interested to hear whether or not this is the case from people closer to such a condition, though (ie: OpenBSD, NetBSD, ???).
If they would rather refuse free hosting and will "not allow the conversation to go that way", then it means they want to keep support for some physical hardware that's hard to find in existing datacenters.
This is like saying "I don't care if my arcade goes out of business, I'm going to keep the power hungry cabinets alive even though they only get used once every 3 years."
"... and unusable for any kind of modern desktop usage unless all you do is write code in a terminal."
Sounds good to me. Many times (actually most times) I have no need for a "desktop" metaphor on my screen in order to get things done. I actually get big jobs done faster without the desktop metaphor in the way.
"... the devs working on them probably could care less what the rest of us think."
That's what makes them so special.
Perhaps in the long run the most "powerful" and sought after computers will not be the ones with the latest chips, but the ones that the user has the most knowledge of and control over.
Can you imagine the old-timer reminiscing: "Remember when computers didn't have backdoors built-in?" or "Remember when you did not have to pay for a license to write programs for hardware you bought?"
A mips32 port would also be very desirable for all those shitty little routers with their ancient Linux kernel, but sadly nobody is working on that at the moment.
So, should the network and firewall OS that many people claim OpenBSD is, slash its support for MIPS devices?
There are probably a few hobbyists that still use SGIs. Octeon is mostly commercial vendors, who could pay for all of OpenBSD's needs out of their fancy executive toilet-paper budget. And there's one OpenBSD hacker who uses Loongson; even GNU cult leaders use GNU/Linux on Loongson instead.
The important part of sdkmvx's comment is that those SGI workstations help find bugs that also affect x86 users. The more diverse ecosystem makes it easier to reliably detect and reproduce tricky race conditions, endianness bugs or memory management mistakes.
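As a toy illustration of that bug class (a hypothetical Python sketch, not OpenBSD code): anything that serializes raw native-endian integers to disk or the network looks fine on x86 but silently produces different bytes on a big-endian machine like SPARC or SGI's MIPS, which is exactly what a big-endian build box flushes out.

    import struct

    value = 0x12345678

    # Native byte order: whatever the host CPU uses
    native = struct.pack("=I", value)

    # Explicit big-endian (network) byte order: identical everywhere
    portable = struct.pack(">I", value)

    print(native.hex())    # '78563412' on x86; '12345678' on a big-endian box
    print(portable.hex())  # '12345678' on every platform

Code that stores or compares the native form works by accident on a little-endian monoculture; the mismatch only surfaces once a big-endian machine is in the build farm.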
But if the project stops keeping obsolete build machines around, they would save on electricity bills. Hackers could still hack, they just wouldn't be killing the project with legacy support costs.
I think if you're openly soliciting donations under the premise that they're essential to keeping the project running at all, you can put up with the people you're trying to get money out of asking questions to ensure that the money you're asking for really is as needed as you say it is.
Especially since he is almost always technically correct--the best kind of correct.
What OpenBSD foundation really needs is a tactful and charismatic person to act as firewall and pf between Theo and the people with overflowing bank accounts, who are more accustomed to dealing with obsequious salesdroids than a person who is not only ten times smarter than their entire golf group put together, but also so aware of it that he cannot hide how much of a waste of time it is for him to suck up to any one of them, no matter how much he could use the cash.
Do you think Apple would have gone anywhere if Wozniak was the one talking to all the investors?
No, what OpenBSD needs is a broader FOSS community that doesn't turn every bump in the road into a referendum on bruised feelings from years ago, and recognizes its indebtedness. I type 'ssh' how many times a day?
Not that I don't understand those feelings. I've spent enough time lurking on openbsd-misc to have seen Theo and friends be beastly. But never without some provocation. And one might wish better impulse control on any number of online personalities.
(And if you find this rude, note that I'm not involved with OpenBSD -- not even on the mailing list anymore -- so blame me, not 'the OpenBSD community'.)
I don't see why they're proposing an all-or-nothing situation when they could choose to move and, in doing so, limit the number of platforms they support. If anyone complains about a specific platform being dropped, well, have them pay for the overhead associated with it.
Theo says it's because if they drop $platform, they'll lose the devs that like working on $platform who quite possibly work on @other_platforms as well.
Change is hard, but I think it's worth a shot. And you never know: maybe spending more time focused on real needs will create a better product, with the extra attention going to things that are actually worth the work.
Realize that funding for OpenBSD has been an ongoing problem for decades now. Theo would like to focus exclusively on managing the project, but instead he has to keep dropping everything to deal with the funding problem.
And, the OpenBSD development team has contributed a lot to the software community, so it's extra frustrating not to get enough support back.
I'd expect that at some point it just stops being a fight you want to keep fighting.
I've seen a picture posted on Slashdot of what their server rack looks like. There are many very old machines; I am sure that at least one reason is fear that they would break during transportation.
Why not move the machines, and if the Amiga breaks down and they can't find a replacement, end Amiga support? I mean, that's not a wonderful outcome, but what would you prefer to see given the following options?
a) Shut down OpenBSD
b) Shut down Amiga support in OpenBSD
I mean, is it even a hard choice?
Besides, if there are many developers who like developing for Amiga, surely they would be able to find a replacement?
The Amiga port isn't live. It hasn't been maintained for a while IIRC.
There are two important points that shouldn't be forgotten about aggressively pushing cross-platform: it retains developers and exposes bugs. There's a great deal of usefulness behind it, beyond simply making it obvious that the workstations we get today are shit.
Ok, Amiga was just an example I pulled off the top of my head.
I am not suggesting all legacy platforms need to be cut. I'm suggesting that it's possibly an acceptable risk, and also if a replacement UltraSPARC simply cannot be sourced, there can't be that many developers working on UltraSPARC anyway. (Just as an example)
I completely agree with the association you're drawing. In my view OpenBSD is very plainly holding itself hostage for $20,000 cash and expecting everyone to accommodate them.
All three options exist, but the maintainers pretending that option b doesn't exist at this point in time increases the probability that option c will succeed.
If the pretense doesn't work, be very much assured that they will go with option b.
That image has been present on the lower right corner of the OpenBSD homepage for a long time. Given the name of the image, I imagine it's been there since at least 2009, but my (fuzzy) memory wants me to believe there has been an image of a rack there even before then.
It reminded me of a post-it I left in the company lab with a diagram for how to do proper gigabit cross-overs. I could still find it there five years later after they rearranged the lab several times.
"Automatic MDI/MDI-X Configuration is intended to eliminate the need for crossover cables between simi-lar devices. Implementation of an automatic MDI/MDI-X configuration is optional for 1000BASE-T devices. The assignment of pin-outs for a 1000BASE-T crossover function cable is shown in Table 40-12 in 40.8."
It seems likely that they don't trust anyone else to have physical access to the machines for security reasons. Their threat model probably includes national governments.
My first "real" job was in the mid-90's; I was the first technical hire at a small Chicago ISP (EnterAct) that grew into a relatively large ISP (when I left, we were default-free peered to several tier-1 providers and had more POPs than I can name). It was great, and the team that started it --- two Big-5 accounting firm programmers --- was inspiring, particularly when it came to business strategy.
Anyways, very early on, EnterAct managed to maneuver into a reputation for premium customer support. We got that reputation by doing some concrete things differently than our competitors: we staffed an appropriate number of CSRs, trained them to be nice to customers, did a lot of gratuitous tech support for basic computer problems, and were flexible about resolving billing disputes. Sadly, a lot of those things were differentiators at the time. A couple years in and we were essentially able to hang "best customer support" on our list of features, and eventually we became the most popular ISP in Chicago largely based on that.
But something I came to notice pretty quickly: the things we were doing to earn that support reputation stopped being empirical differentiators pretty quickly. Our largest competitor, run by Karl Denninger, did us a continuing series of favors by pissing off their customers. But other large regional ISPs pretty quickly learned not to set fire to their customer base, and, by the end, I think our customer service was pretty much at par for the whole area; we were no longer truly different based on support. The reputation, however, never left.
That observation has stuck with me for my entire career. I think about it all the time. It's banal, I know: "early impressions count a lot", but there's a little more to it than that: you can weaponize an early impression by turning it into your market positioning and having some message discipline.
I left EnterAct for a job in Calgary with a company called Secure Networks (SNI), doing development and security research. For the year prior to leaving EnterAct, I had also been working with the OpenBSD project, mostly by writing all their security advisories, but also doing a bit of part-time security research. SNI operated the world's first commercial vulnerability research team, and had a very close relationship with Theo; we had a full time employee who had essentially led the first OpenBSD security audit. I went drinking with Theo many times, and vividly remember hanging out in his basement with Tim Newsham eating bad pizza and trying to find vulnerabilities in Daniel Bernstein's qmail (we found one that would work if integers were 128 bits, but ironically missed the LP64 bugs that Georgi Guninski found; it was 1997, though).
This is all a long prelude to a simple point, which is that I think OpenBSD's reputation for security works in a very similar way to how EnterAct's reputation worked. OpenBSD started doing something very different than FreeBSD, Linux, and (particularly) NetBSD: they did an OS-wide audit for vulnerabilities, and aggressively fixed apparent bugs whether or not we could demonstrate that they were exploitable. That was a great move. But it was so obviously great that pretty much everyone (with the possible exception of NetBSD) quickly adopted the practice.
Among security research insiders, OpenBSD's reputation became a little bit farcical. Not that OpenBSD was comically insecure --- it wasn't --- but that its reputation so far outstripped its actual differentiation. People found a bunch of vulnerabilities in OpenBSD and laughed as the claim at the top of the OpenBSD homepage changed from "no vulnerabilities" to "no remotely exploitable vulnerabilities in the default install".
And at some point in the last 10 years, didn't OpenBSD's distro servers get owned up?
I'm sure the OpenBSD project would like its threat model to include NSA. But OpenBSD is not a meaningful ally in a contest between you and NSA. NSA wins that fight. OpenBSD's userland was much stronger than FreeBSD's in 1999, but I'm not sure I think their kernel is stronger in 2013, and that's probably what matters more.
Let me wind this bloviation up with a caveat: one thing a reputation for security gets you is a feed of talent that is interested in working on security problems. OpenBSD certainly got that. So for instance, OpenBSD's developers designed and built privilege-separated OpenSSH. There is a lot of good security work that has started inside the OpenBSD project, and I don't mean to talk any of that stuff down. I'd just be careful about taking the project's overall reputation to the bank, especially if you have serious adversaries.
Sorry for hanging this sprawling comment off your (simpler) point; I just don't want the root comment on the thread to be me talking down OpenBSD.
I know OpenBSD's reputation is primarily security, but I use it for a different reason. It's simple, stable, and doesn't break.
Back when I was in high school and I had a lot of free time and all that, the various incarnations of Linux were a delight. Even after that, I still went with it out of inertia and spent many evenings tweaking Gentoo.
I eventually just goddamn gave up. I got sick of every upgrade breaking something in my system and then especially got sick of deciding between figuring out how to use wpa_supplicant and installing NetworkManager which screws up my network settings as soon as I plug in the Ethernet cable while I'm still on my wireless. In a flight of rage I thought ok, I've had enough of this crap, and went the OpenBSD route.
Seriously, it has all the nice parts of Plan 9 while still actually being able to run all the tools I need. I still have Linux and Windows boxes for the odd tools that don't work on anything else (I do embedded systems for a living, and there's a lot of vendor lockdown there), but for my day-to-day workstation, I found nothing better.
In 2009, our development team lost a whole 10 hours to a degraded Linux mdadm RAID1 that wouldn't rebuild due to an obscure error, after a digger severed our power and internet connection. No internet access (power came back up before the internet did), so no access to online help. mdadm is buggy. Documentation sucks. Error messages suck. Only recourse was a full restore from tape, which took a long time. This was the last straw after over a decade of dealing with this crap: network dropouts, laziness, half-arsed features, distro wars, politics and churn.
Some previous Unix experience in the late 1990s with OpenBSD on an old SparcStation 5 (the only thing that would run on that machine nicely) jumped into my mind on the way home. It had that warm, fuzzy, well-engineered, well-documented feeling about it, like an old HP RPN calculator. Got home, downloaded it and installed it on my laptop, replacing Ubuntu.
4 years down the line: one happy person with the same laptop running 5.4 still with that warm, fuzzy, well-engineered, well-documented feeling.
Not once has it let me down. Not for a minute in the 4000+ hours I've been using it. It just works.
And OpenBSD has the best-written man pages in all of Unix.
When I got thrown in the deep end with Solaris, many years ago, I'd read the Solaris man page for the options, but first I'd read the OpenBSD man page to work out what the hell the command was for and why.
The most offensive man pages are GNU project pages that effectively say, "for real documentation read the info page". Which, as someone that can never remember how to use info, is frustrating and just serves to piss me off...my first thought is "and a big fuck you to you, too". And then I look it up online so I don't have to read how to use info before I can read how to use the command I was looking for docs on.
I don't know if this is common practice anymore...I don't remember the last time I saw a defective man page like this, but I still remember it with great anger. I love GNU, but I hate the kind of condescension it takes to try to force someone to use a different tool because you believe it to be superior to the standard tool (when it's really not; I find info pages to be obtuse to create, and difficult to read).
GNU's stance on man pages is entirely correct! For real documentation, read the info page, but you rarely want real documentation, you just want a quick example or the command-line invocation syntax, or what a particular argument does. And 99% of the time, that will be in a man page.
The problem arises when you want to find something the other 1% of the time, and it's here that man pages become sprawling unindexed messes. For example, take a look at the man pages for perl or zsh: you'll have no chance of finding anything, as those programs are so large that a wealth of documentation has to go into them. At the same time, the info page for ls contains the things you rarely need to see, such as exactly how things are sorted or the minute details of timestamp formatting. If this were all in the man page, you'd complain that you couldn't find anything in it.
I don't know, I always found the perl and zsh man pages to be rather pleasant. They were sprawling, sure, but having long ago given up on brevity, they have no fear of meticulously describing how a feature or flag works. And they're just man pages, so you don't need to read the manual-for-the-manual first like I always find myself doing when I'm forced to use info.
This fundamentally goes against the Unix philosophy, though, which is to provide small well-defined parts from which you can construct a complete solution.
If you need a complex manual for a complex program, something is wrong.
What does this even mean? The name GNU itself is a joke.
I mean, the project wants to create an operating system that looks like UNIX, acts like UNIX, smells like UNIX, but built from scratch with an appropriate license that allows usage of and access to the source for anyone - so that the project doesn't get into legal trouble from whoever actually owns UNIX?
So drunk Stallman in 1983 says: "hik, let's call this, hik operhikating system GNU, hik, because it's not UNIX hik, but it sure looks like one, hik, but it's not, hik, but it kinda is hik, but it's not theirs hik it's everyone's hik". That's how I like to imagine it happened.
This was my impression as well after using OpenBSD, and when I pointed that out a while back on HN, it was noted that the core Linux manpages have gotten much, much better in many cases[1]. In that respect, it may be another example of the GP comment.
1: My go-to example was always ifconfig, but Linux's manpage for ip(8), which is actually the Linux equivalent, really isn't that bad. Quality probably varies quite a bit based on the package that supplies the utility, though, while OpenBSD's quality is fairly universal.
I wonder where the best place is to report manpage bugs - for things like the builtin commands that may not have a single upstream. Does Ubuntu pull in a manpage update from Fedora? What about the other way around?
> And at some point in the last 10 years, didn't OpenBSD's distro servers get owned up?
Yes, a cvs bug I believe. No kernel will protect you from bad user-mode code that really wants to execute everybody's shell script.
> Among security research insiders, OpenBSD's reputation became a little bit farcical.
I spent lots of time looking through the OpenBSD kernel, together with the FreeBSD and Linux kernels. It was my job for years: looking for vulns and writing exploits for them.
I still admire the OpenBSD kernel for its simplicity and tidiness.
No comparison to FreeBSD kernel-side. The FreeBSD kernel often has commits of several hundred KB of mostly unaudited code. They still don't enable stack protection today, in 2014. It's a joke. My Windows phone had stack protection in 2003.
No comparison to Linux either: the Linux kernel is so huge, so full of code, that even though it's way more audited than FreeBSD, there are still vulns lurking everywhere, and exploits for the Linux kernel come out almost monthly. That's probably the reason it has so many security features, more than OpenBSD nowadays.
Windows? Their kernel is a work of art. Microsoft only has to fire the guy that says "hey, I got a great idea, let's parse some random protocol inside the kernel".
But I digress. OpenBSD is still very good. Very safe in the default install. Will it protect your Firefox from being owned by an NSA-sized enemy that really wants to hack you? No. But the problem is in the browser, not in the kernel. Don't use a big browser. It's not in the default install :)
Thomas, thanks for that comment. If there was a "best of HN", this comment should be a part of it. Good storytelling, a great business lesson tidbit for all of us, interesting technical discussion, and a good reality check.
I don't recall if I stayed to the bitter end, but I started making provisions for a move after I ended up arguing with Karl over whether inbound mail was being corrupted, maybe around the time of a conversion to (from?) maildir.
I think I started when it really was only Karl, was Dawn his first hire? Hmm, I probably still have the t-shirt as well.
Totally off-topic, but I remember those days. At some point, I got a copy of my customer record and saw "MCS bailers" in the referral field. Got a good chuckle over that. I don't even remember what KD did, but I remember choosing EnterAct because you were one of the last ISPs in the area that offered a dial-up shell. That was in the days when I had a Commodore 128 set up in my home office to mess around on.
Why not just do what Linus Torvalds does and simply trust the hash function? For anyone to tamper with the Linux kernel sources and have him not notice, they'd have to generate a SHA-1 collision and somehow get the change past thousands of clones of the repository.
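For context, git's content addressing is easy to demonstrate: each object ID is the SHA-1 of a short header plus the content, and commit objects in turn hash the tree and parent IDs, so changing one byte anywhere changes every ID downstream of it. A minimal sketch in Python (the blob case only):

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # git hashes "blob <length>\0" followed by the raw file contents
        header = b"blob %d\x00" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    # Should match `echo hello | git hash-object --stdin`
    print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a

A tampered file would need to produce the same SHA-1 under this construction to go unnoticed, and the altered IDs would also have to agree with every existing clone.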
If he's asking for money from me, I would like to know why it's not an option. The root of the issue being raised is power/space, so I'd definitely want to know why I'm forking up for something the project could potentially get for free.
It's not a big deal, and I don't expect him to go into detail. He just won't get a cent from me without elaborating, and that's OK. I'm not mad, and I understand he has misgivings. I just don't think that answer is acceptable enough for me to donate, but that's my subjective opinion (and not everyone else's).
> The more transparency you have in your discussion, the more supportive people will be.
I think that's total, obvious nonsense and if you need to be convinced, here's an exercise: consider how much money the average nonprofit would raise if people knew where all that money went.
That is a nice tool, thanks. Someone else posted something similar recently and it wanted to charge $250 for membership or something like that.
The web is bringing a lot of good transparency to nonprofits but there's still a lot of repugnant wastefulness and avarice that often isn't captured well by a 990 form. (publishing salaries is pretty huge, though)
I stand by my point that the more a person learns about the average charity the less they're going to want to donate... transparency doesn't magically lead to supportiveness. And wanting a project to account for every watt of electricity is just completely silly.
edit: transparency is a way for better charities to look good relative to poor ones, yes, but all things being equal, it's a negative for fundraising: as with business and government, a lot of what goes on in ANY organization is ugly to look at and is bound to turn some people off. (none of that is an argument against transparency itself, let's just not kid ourselves about its usefulness for raising money)
The more transparency there is, the better the legitimate charities look and the less likely people are to throw money at what turn out to be obvious scams.
I get your point, but I think you're generally wrong. My criterion around that for donating is not, "Is this place perfect?" but "Is this place materially more screwed up than any other organization?"
I'm sure there are some people with unrealistic standards that would not donate at all. And I'm sure that there are plenty of organizations that take advantage of a lack of transparency to do dubious things. But the solution to that is more transparency. And more analysis of the transparent information, so that people can easily contextualize it.
Which is the point exactly. People DO have confidence in the nonprofits they donate to. It might not be warranted at all, but the nonprofit has actively built confidence for their audience.
The average nonprofit isn't worthy of the money it gets, so unless you are trying to say that OpenBSD is not worthy of the money, and that the only way they can get it is to hide the details, I don't get what your point is.
Probably, but then again if he wants my money he better explain why he needs it and how he's going to spend it, doesn't he?
That being said since OpenBSD is all about security maybe that's the reason they don't want to move the servers to some place where they won't be able to monitor physical access to the machines. That's pure speculation though.
If I'm donating I would like to know exactly where the money is going, and what options have already been explored. OpenBSD should have referenced, full documentation about these things if they want to maximize donations.
Apparently, there isn't very much documentation/open accounting, and they aren't willing to discuss options to reduce the bill. That doesn't inspire confidence.
> Apparently, there isn't very much documentation/open accounting, and they aren't willing to discuss options to reduce the bill. That doesn't inspire confidence.
It is a lot of work for a small team to itemize and publish every expense, but some rough breakdown of monthly expenses that my donation would be going towards would really help.
If their books are clean, this is actually pretty easy. Just pulling an annual operating budget should be much easier, if they have good financial practices and controls in place.
They're not looking for a lot of smaller donators in this specific instance (although I'm sure they're appreciated), but rather for one large Canadian company to foot the bill and put it on that company's books for accounting purposes.
And suppose OpenBSD wanted to know exactly wtf you were planning on doing with OpenSSH after you downloaded it... what servers are you planning to connect to, what keys are you planning on using? You know, if you're going to use OpenSSH they want to know exactly what for. Just leaching off the project doesn't inspire confidence.
OpenBSD releases quality software that all of us use EVERY SINGLE DAY. As far as I'm concerned, Theo can take the money and buy a yacht with it, as long as they keep doing what they are doing.
Yes he can (and probably should) buy a yacht. What you're missing though is that by not being transparent they miss out on a lot more contributors. So it's not a super smart move.

Also keep in mind that a lot of contributors might not use OpenBSD, yet they might be interested in offering some small amount if they believe it's for a good cause and they know where that money is going.
...there isn't very much documentation/open accounting...
So your feeling is similar to that toward a homeless dude? You'll give him a sandwich but not cash? If they're saying power is the shortfall, maybe we just need to buy them some solar panels or wind generators or something.
I'm not sure how much developer hiring they're doing. They said the number they need annually is more in line with $150,000. I'm not sure if that's in addition to current donations or the total, but you wouldn't be hiring many devs with $150k minus the $20k for power, wherever Theo needs them.
Especially if the fate of OpenBSD as it stands is hanging in the balance. Depending on who is offering, this may be because of the uncertainty of whatever arrangement is being proposed. For example, if a smaller company or an individual offers to foot the bill, what happens if the company/individual later has a budget crunch of their own, or decides to cut ties?
Of course, if an IBM/Apple/Google/etc offers space/power, it may be a less risky proposition.
I'm pretty sure that a lot of the older hardware requires at least some degree of hands-on administration. Rebuilding and testing a new kernel on a VAX with no remote administration features would slow things down. Having stuff easily available makes a lot of sense to me.
The chart that is included is ridiculous. There's no label on the Y-axis at all, and zero indication of what it is we're supposed to take away from it.
why doesn't tesla use charts comparing searches for "internal combustion engine car" vs "electric car"?
no one searches for the term "desktop ide" because the only people who call them that are people in the cloud ide marketing game. the terms are junk. the axes are junk. the chart is junk.