Honestly, for some time the security software from MS has been "good enough" for most people. Then again, there are plenty of people who will click "OK" to just about anything that pops up... which is why, when one of my grandmother's PCs finally died, I bought her a chromebook... best option for those not technically inclined.
Microsoft has by far the best AV product (from a code quality aspect, at the very least), but I'm not sure if it qualifies as a vendor in this context. After all, MSE is essentially free.
I read an article about the similarities and differences between data art and data visualisation a short while back (a couple of weeks ago?), but I can't seem to find it right now. Mike Bostock's visualisations, though very visually pleasing, always tend towards the functional side of the spectrum: they serve to enhance comprehension of the concepts or relationships that lie in the data, rather than being designed for purely æsthetic aspirations.
There's obviously a place for both visualisations and art, though, and the line between the two is not always clearly defined.
I find it somewhat unfortunate that an LWN subscriber link is being abused like this; I don't think such a link should be shared on a widely accessible platform like Hacker News. LWN articles are consistently of great quality, and the subscription is definitely worth the cost if you can afford it. Besides, "subscriber-only" content becomes publicly available after just a week, so there is no reason to share a subscriber link like this on Hacker News. The discussion could have waited a week.
"Where is it appropriate to post a subscriber link?
Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared."
Then again, so do the BSD tools. And of course the options are often not compatible between the two, e.g. BSD sed's -l enables line buffering while GNU sed's -l specifies a line-wrap length; BSD sed's -i requires a backup extension argument (empty for no backup) while GNU sed's does not.
POSIX sed supports exactly 3 options (-n, -e and -f), the latest FreeBSD sed supports 10 (adding -E, -a, -I, -i, -l, -r and -u; the last two are GNU-compatibility options not necessarily available on older versions, and they are absent on my OS X machine) and GNU sed supports 9 short and an additional 4 long options. And that's not counting the extensions to the sed command set.
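To make the -i incompatibility concrete, here's a small sketch (file names are made up): the only fully portable way to edit a file in place is to avoid -i entirely and go through a temporary file.

```shell
# GNU sed:  sed -i 's/foo/bar/' file       (backup suffix optional, attached to -i)
# BSD sed:  sed -i '' 's/foo/bar/' file    (suffix is a separate argument, '' for none)
# Portable: skip -i and write to a temp file instead.
printf 'foo baz\n' > input.txt
tmp=$(mktemp)
sed 's/foo/bar/' input.txt > "$tmp" && mv "$tmp" input.txt
cat input.txt
```

Each form passed to the other sed either errors out or silently misparses its arguments, which is exactly the kind of thing that bites in "portable" shell scripts.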
True, which is why it is hard to keep up with multiple UNIX implementations and to know what is supported on a given box without having to reach for man first.
He's not really actively involved, though. He's just been on a crusade lately to eradicate unwanted VCSs still used in older projects, including CVS for NetBSD and bzr for emacs. Honestly I'm glad he's doing it, I'm hoping it might make it more likely for newcomers to contribute to these projects. He has posted quite a few articles about these conversions on his blog [0], if you want to read about it.
Wonder if he's tried to evangelize on the OpenBSD mailing lists yet? The smackdown from Theo would be amusing.
To some extent, it's good he's doing this. On the other hand, very few VCSs other than SCCS can really be called "obsolete", just primitive. Most of the time, people who push for VCS migrations seem to be bikeshedding, and can't give specific justifications beyond listing features the upstream project likely doesn't need. It's not trivial either: changing a VCS often means changing the entire way a project is structured, for often uncertain gains.
Well, I can pick on OpenBSD's favourite: CVS repository corruption does happen, but CVS has no active integrity checking. You'll only detect it if you go far enough back... or run a conversion program. If you're lucky, you discover it while it's still covered by backups, but these issues can sleep for years. This turned one migration I was involved with into a simple scrape of the tip, as the code there was fine.
(I'd also say that atomic commits are an excellent reason to leave CVS behind, but that can be argued.)
You should never use a VCS that has no way to ask "is this repo in a valid state?", or you may one day try to check out the point release shipped to a large customer and be unable to do so. Data corruption can happen, even on RAID arrays (we're pretty sure it was a bug in the controller, but that doesn't change the effect).
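For comparison, here's a rough sketch of what that kind of active integrity check looks like in git (the repo and file names are just examples); CVS has no equivalent operation:

```shell
# Create a throwaway repository with one commit.
git init -q demo && cd demo
echo "hello" > file.txt
git add file.txt
git -c user.name=test -c user.email=test@example.com commit -q -m "initial"

# fsck walks every object in the store, re-verifies its checksum, and
# exits non-zero on damage, so corruption is caught now rather than
# sleeping for years until a conversion trips over it.
git fsck --full
```

The content-addressed object store is what makes this cheap: every blob, tree and commit is named by its own hash, so verifying integrity is just re-hashing.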
He has written a bunch of nontrivial code, and is running that code himself on his own machines (and those donated specifically for this purpose). He leaves it to the community to decide whether they want the products of that work. This seems the antithesis of bikeshedding.
There were three distributed VCSs in the running for a while: git, bazaar (bzr), and mercurial (hg). For a while people were using all three, but eventually git won out.
In my limited experience with bzr, it was slow as molasses. The only competitive advantages it had over Git at the time were a simpler command-line interface and better Windows support (we're talking about the days when git commands were still spelled git-branch, git-reflog and so on).
Eventually Mercurial ate Bazaar's lunch, and now Git is slowly pulling projects away from Mercurial. All hail Linus, who managed to create two open source systems that became de-facto industry standards.
How does it compare speed-wise with git? That was the reason I switched from darcs 8 years ago. Some of my repos go back over a decade, and darcs had problems with them when there were just two years of history.
There was some issue with exponential running times in certain scenarios. I ran into an issue where two repositories were somehow in a state where a pull would never terminate. darcs definitely had problems.
esr knows how much of the open source community's success is driven by social factors as opposed to technical factors. I'm excited about his progress as well.
Well, I've never used OProfile myself, so please take this answer with a grain of salt, but from a quick perusal of their website, and as the project's name seems to suggest, OProfile is a sampling profiler, which usually means it collects data from performance counters at regular intervals, producing statistics over time. This generally implies a somewhat significant overhead, and a consequent performance drop.
LTTng, on the other hand, is a tracer, which means it collects events from the kernel (using the built-in tracepoint facilities) as they happen. It's also possible to trace userspace apps (with lttng-ust), or define your own tracepoints. This has the benefit of being much more detailed, and also has a much smaller overhead.
You might be interested in reading the "What is Tracing?" section of the lttng docs [0], which does a far better job at explaining this than I do.
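For anyone curious what a kernel tracing session looks like in practice, here's a minimal sketch from memory (session and event names are just examples, and you'll need lttng-tools installed plus root or tracing-group privileges):

```shell
lttng create demo-session                  # create a tracing session
lttng enable-event --kernel sched_switch   # record scheduler context switches
lttng start
sleep 2                                    # let the workload generate events
lttng stop
lttng view                                 # pretty-print the recorded trace
lttng destroy demo-session
```

Note that events are only recorded while the session is running, and each hit records a discrete kernel event rather than a periodic statistical sample, which is the tracing-vs-sampling distinction described above.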
Full disclosure: I am a contributor to the LTTng project.