Classic Usenet posts on computer architecture, operating systems and languages (yarchive.net)
267 points by caned 3 months ago | 59 comments



The older I get, the more I feel like Usenet was/is a lost opportunity. With a few improvements it could have taken the place of Facebook (not the original connect-with-friends version of Facebook) and Reddit. It could have been a great option for local business promotion and news distribution, and the same segmentation into groups would map almost perfectly onto subreddits.

It is decentralized, multiple clients were available, and you could segment into topics as narrow as you'd like.


@asjo over at koldfront.dk is still using NNTP for all sorts of things (RSS aggregator, blog engine, ActivityPub server): https://koldfront.dk/just_call_me_mr_nntp_1871

Indeed, it was a lost opportunity. The protocol is easy to understand, and it plays well with decentralization, as you say, and with self-hosting.

With Python 3.12 out, its nntplib module is scheduled for removal in 3.13. What a pity.
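
For anyone who hasn't used it, here is a minimal sketch of how little code it takes to read a group with nntplib (works up to Python 3.12; the server hostname and group name below are just placeholders):

    import nntplib

    # Connect to an NNTP server (hostname is a placeholder)
    with nntplib.NNTP("news.example.org") as srv:
        resp, count, first, last, name = srv.group("comp.lang.c")
        print(f"{name}: {count} articles ({first}-{last})")

        # Overview lines (subject, author, date, ...) for the last ten articles
        resp, overviews = srv.over((max(first, last - 10), last))
        for num, over in overviews:
            print(num, nntplib.decode_header(over["subject"]))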

The NNTP protocol surely deserves a revival.


Usenet has a few hard problems, no? Mostly related to moderation. You want to be able to edit or delete content, either your own, or, if you are a moderator, other content which turned out to be illegal later (which already implies content moderation before allowing content to be published). Spam is another big problem.

To avoid impersonation, you'd have to sign everything, which I guess is possible, but experience shows crypto done properly is a huge pain. We already see how unsatisfactory this is with email.

Plus, does usenet scale as well as, say, reddit? Sometimes heated threads are more like comment streams with probably dozens of comments per second.

How can these be solved with a few improvements?


> You want to be able to edit or delete content, either your own, or, if you are a moderator, other content which turned out to be illegal later (which already implies content moderation before allowing content to be published).

Or you accept the fact that what you write is a permanent record. (Local admins can delete messages from their servers if there is a problem with the content.)

That being said, there are Control messages (RFC 5536 § 3.2.3), and one of the things that they can do is cancel an article (RFC 5537 § 5.3). You'd probably want to authenticate the cancel messages of course:

* http://fi.archive.ubuntu.com/ftp.isc.org/pub/pgpcontrol/READ...

* https://en.wikipedia.org/wiki/Control_message

Worth noting that you can't force servers to honour the cancel messages and delete the article on their local storage.
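
For concreteness, a cancel is just an ordinary article whose Control header names the target's Message-ID, roughly like this (all the IDs below are made up):

    From: moderator@example.org
    Newsgroups: example.moderated.group
    Subject: cmsg cancel <original-article@example.org>
    Control: cancel <original-article@example.org>
    Message-ID: <cancel.12345@example.org>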


A parallel copy of Usenet is used for piracy, with a posting volume measured in double-digit gigabits per second. It can probably handle Reddit.


That's regarding the size of the entries, not the frequency though. Many small comments in a short time frame will have a huge overhead.

Also, there is a reason why binary content was first frowned upon, then became essentially a paid feature. It's expensive.


Impersonation and spam are things you find with other ways of running a forum, too.

On Usenet you can easily stop following a group or person.


Isn't spoofing an email sender address much easier than taking over an account on a web-based forum? I think forums are very robust against that, while email and usenet are not.


Plonk


There was some minor discussion in this direction in the thread about email 2.0: https://news.ycombinator.com/item?id=40392709

Bad service drives out good service just like bad money, for some reason. Centralized beats decentralized because convenience. The convenience is what they're paying to entice you to come into their home and let them eat your brains, and every single person falls for it, probably including you and me.

Classic Usenet is still around. If you have an email address, you can get an account on Eternal September. There aren't that many users. You can be one of them.


It's not just convenience. Having a centralized authority - with all its downsides - comes with some crucial advantages. They can:

* provide identities

* moderate content

* vouch for integrity

* enforce a consistent set of features

The same problems haunt email. It's also why Marlinspike was always against federating Signal. Decentralization comes with some hard problems.


True enough, but we know what Scientology believes because of Usenet (alt.religion.scientology). The Church of Scientology literally ran out of lawyers for DMCA takedowns, which don't work on Usenet because it is structured like the internet.

NNTP needed some things, to be sure, but the above issues have technical solutions that seemed out of reach not because they were too complex to solve, but because it seemed as if society itself wanted multimedia web pages like a coke addict.

Also, things like digital identity seemed to flag, as innovation in such things stalled out around the time HTTP was rising. I remember moving data between mainframes from different vendors, and what was once a set of problems that seemed intractable suddenly fell as TCP/UDP/IP took off with a suite of solutions that worked so well together.

Usenet was a better place to get good information than web pages at that time, in part due to bad web site designs and DMCA takedowns, neither of which was an issue for NNTP. I like Reddit because it (at least old.reddit) is similar to Usenet in how it is structured.

Had NNTP adopted HTML and Netscape written a browser for newsgroups using HTML, the so-called 'world wide web' and its anachronistic client-server model might not have effectively displaced NNTP.


This!

I still hang out in a couple of groups, though the traffic there is getting thinner with each day...

It was simple. I wasn't forced to have an account (though one would be nice/useful nowadays, I guess, with all the impersonation and whatnot), and threading was awesome. Scoring+plonk was even more amazing. There was a problem with syncing what was read and what wasn't, but that could easily be improved now...

I think I mostly miss scoring - it was such an amazing tool for managing/filtering content...


I hope the last post is in alt.obituaries for usenet itself. Unfortunately it will probably be a post about a free ebook download cross posted to half of usenet.


There was horrible arrogant gatekeeping for new feeds, especially if you asked for a moderated feed.

Those gatekeepers never saw the train coming (web) and then it was too late.


That's more of a social problem than a technical one. Although iterating on the technology ultimately circumvented it, it could have been addressed without completely throwing away usenet.


Usenet was never thrown away. It's still there if you want it, but you'll have to pay to use it.

https://www.usenet.com/

What changed was the user profile. Late 80s/early 90s internet was used almost exclusively by academics and engineers. Mid-90s onwards it was open to everyone, and Usenet access was a free feature of most dial-up accounts.

Usenet was ground zero for that change, because university and corporate admins had to deal with a lot of new groups - including those devoted to binaries, and especially porn.

There was a lot of gatekeeping, but the useful groups either migrated or disappeared under spam, and the conversations moved elsewhere.

HN is probably the closest thing to the original culture, but HN's single page design means you don't get the intense focus and conversational momentum of Usenet's professional groups.


Back in 1998 my ISP dropped NNTP because it was pretty expensive to run: it required large, fast SCSI disks and constantly ate quite a bit of bandwidth. At the same time rather few customers used it, so it was an easy decision for them to just decommission that server.


I recall my ISP doing the same eventually around the same time. At first, they stopped carrying binary groups and, later, all of Usenet.


That must have been before the high-bandwidth binary side decoupled from the forum side.


Yes but no: my ISP never carried alt.binaries


Current text volume is on the order of a few gigabytes per day, including spam.


Way back then my ISP had a T1 shared among hundreds of customers. :-D


Why? It works great. Since google dropped usenet access through the web, spam is down, discussions are great, and if there is no discussion start one. I'm enjoying usenet immensely!


How do you get set up and started?

I could trawl through a few blogs of course, but I'd be interested to hear what you do yourself, for a current snapshot of usage. I think it sounds interesting, and I read some quite convincing propagandistic material about it from Harley Hahn while working through some Unix book a little while back.


eternal-september.org is a good, free option. But the frequency of usenet posting (at least on groups that I still look at) is pretty low...


Thanks, will have a look there.


One of the Usenet postings that still gives me the chills is this one from Berlin, November 10, 1989:

https://groups.google.com/g/eunet.politics/c/LbrVEM7zp-Y/m/a...


"On the subject of reunification, the next day on the Today program on BCC Radio 4 they were interviewing various "notable" people about the situation. A very interesting comment was made by some notable Frenchman that Germany has only ever been unified between ~1890 and 1945 and it wasn't exactly as runaway success then."

1980s savagery. Could that even be typed today in polite circles? lol


I saw this on HN a year ago[1]. I was amazed to see people discuss this event that I only knew from documentaries just like we discuss so many things on HN.

[1]: https://news.ycombinator.com/item?id=35937637


That's madness. I sent it to my dad, who has lived in West Berlin since 1950.


That thread is an interesting read. Predicting the future is hard.


whoa, I just learned usenet was started in 1980. this is incredible


https://yarchive.net/comp/sandboxes.html is an interesting example of Clarke's first law: " When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

The post, from Theodore Ts'o, argues that software sandboxing is, if not quite impossible, rather difficult: he argues that either the user or the sandboxed program has to configure the sandbox. The user won't know what to do, and the executable can't be trusted. The author neglects to consider a third possibility: the OS dictates the shape of all sandboxes and software conforms to fit. This is the model we use in Android and iOS, and increasingly even on conventional Linux (via snap and flatpak). Sandboxing is not only possible, but essential.
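
For instance, a Flatpak application ships a manifest whose finish-args list the holes punched in an otherwise OS-defined sandbox; everything not listed stays blocked. A trimmed sketch (the app id, runtime version and permissions here are placeholders):

    {
      "app-id": "org.example.App",
      "runtime": "org.freedesktop.Platform",
      "runtime-version": "23.08",
      "sdk": "org.freedesktop.Sdk",
      "command": "example-app",
      "finish-args": [
        "--share=network",
        "--socket=wayland",
        "--filesystem=xdg-documents"
      ]
    }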

Anyway, being wrong is no great crime. Everyone's wrong sometimes. The author went on to do great things with the Linux kernel. It's interesting to think about why smart people are wrong and to be skeptical of claims of impossibility.


It is not even that. All you need is for the program to present an “understandable” manifest of its requirements, and you can choose to accept or deny. Standardized sandboxes just facilitate ease of use and analysis, and enable application developers to target easy-to-understand security models.

The problem with the post is that it assumes that the most important thing is making sure that all non-malicious programs run no matter how convoluted. It is the same thought process that makes people say nonsense like formal methods are impossible due to the halting problem. No, we can just reject unless you make it easy to analyze; halting problem averted.


> It is not even that. All you need is for the program to present an “understandable” manifest of its requirements, and you can choose to accept or deny.

Sure. And if these understandable manifests contain requests only from a predefined set that the system provides, we call these requests "permissions" in the iOS and Android sense. You don't get to request arbitrary things and ask the user to approve them.
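
To illustrate, on Android the manifest can only name permissions the platform itself defines (the package name and label below are invented):

    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
              package="com.example.app">
        <!-- Only platform-defined permission strings are meaningful here -->
        <uses-permission android:name="android.permission.CAMERA" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
        <application android:label="Example" />
    </manifest>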

> The problem with the post is that it assumes that the most important thing is making sure that all non-malicious programs run no matter how convoluted

Agreed.

> No, we can just reject unless you make it easy to analyze; halting problem averted

Yes, and it's this idea that makes minimalism essential in OS design. The narrower your contract, the more flexibility you have to change your implementation.


Agreed. It was unclear from your original post whether you were talking about coarse-grained sandboxes or more fine-grained systems.

To clarify, a system that defines coarse classes of sandboxes could declare a class of sandbox for non-networked applications, a sandbox for games, a sandbox for applications with storage exclusively for configuration data, etc. Such a system is different in many respects from a system that defines relatively fine-grained, composable requirements. Obviously, a fine-grained system could be composed to present a model at the coarse abstraction level, but the way you target an application differs based on what abstraction level you are targeting.


Okay, then every program demands every permission. Your choice is to let the program use your camera or to not use the program. And some of these programs are probably very important for your livelihood, so we know what the outcome is going to be.


I miss rooting my Android phone and spoofing access to the GPS or camera.

I could pipe arbitrary bits to any application.


You can also profile the execution in a less trusted environment and use that to define the sandbox, like running Linux with NSA SELinux in learning (permissive) mode. This allows a black-box approach to systems monitoring, which works well with secret, forgotten and proprietary systems, and is good engineering practice for most systems anyway, whether hardware or software, as anomalies are often highly detectable.


You can, but it doesn't work very well.


Au contraire, it works very well in many domains of engineering (source: I use it all the time). I suspect you may be viewing the general approach through the narrow lens of one particularly demanding requirement (targeting "absolute security"), and comparing it against an existing approach whose input costs are often considered high or unrealistic (e.g. specialist manual configuration).

For example, if the target system is a software system, and the baseline is "no extra security", then running a full codebase coverage test battery across software with network activity monitoring, computation monitoring, memory monitoring, kernel API monitoring, filesystem monitoring, etc. is going to get you a really strong profile of what isn't used to reduce the effective attack surface. This can be done at multiple levels: system API restriction, filesystem restriction, resource restriction, firewalling, VLAN segmentation, intrusion detection system ruleset generation, etc. This is awesome versus manual config, as it is free, precise, adapts with upgrades once plugged in to CI/CD, and requires zero specialist humans... who often function to perform similar processes in an iterative fashion on a best-effort basis.
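
As a very rough sketch of that last step, assuming you have already captured a run of the test suite with something like "strace -f -o trace.log ./run_tests" (the file names and the Docker-style seccomp output format are my assumptions, not a drop-in tool):

    import json
    import re

    # Syscall name at the start of an strace line, optionally preceded by a PID
    SYSCALL = re.compile(r"^\s*(?:\d+\s+)?([a-z0-9_]+)\(")

    observed = set()
    with open("trace.log") as f:
        for line in f:
            m = SYSCALL.match(line)
            if m:
                observed.add(m.group(1))

    profile = {
        "defaultAction": "SCMP_ACT_ERRNO",  # deny anything never seen during the tests
        "syscalls": [
            {"names": sorted(observed), "action": "SCMP_ACT_ALLOW"}
        ],
    }

    with open("seccomp-profile.json", "w") as f:
        json.dump(profile, f, indent=2)
    print(f"{len(observed)} syscalls observed")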


For those who might think that NNTP does not scale, keep in mind that in its heyday Usenet was not the only NNTP network. There were countless niche NNTP servers not necessarily peering with Usenet. Some niche NNTP communities continue to thrive to this day.

A small closed network of just a few peers can service hundreds to thousands of participants. Social media architecture has accustomed users to having the whole world flooding their feed or inbox, when it need not be this way. This is why I like NNTP. The lack of overwhelming mass participation is a feature, not a bug.


For offline usage:

https://yarchive.net/downloads/

If you use something like Midnight Commander under GNU/Linux or BSD, you can directly read the files without extracting them; it's handy.


I wish there was a way to sort these by date.


You can paste this into your console :-)

Not entirely sure that I parsed all the dates correctly. Simply used the first occurrence of "Date: **" for each link.

https://gist.github.com/Bewelge/1f42c4ba999128ae1ded6f0ecc63...
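
If you'd rather do it from a script, a rough Python equivalent of the same idea (the link pattern and the "Date:" format are guesses about the yarchive pages, so expect to tweak the parsing):

    import re
    import urllib.request
    from datetime import timezone
    from email.utils import parsedate_to_datetime

    BASE = "https://yarchive.net/comp/"

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    # Collect the article links from the index page
    links = re.findall(r'href="([^"]+\.html)"', fetch(BASE))

    dated = []
    for link in links:
        # Use the first "Date:" occurrence on each page, as in the gist
        m = re.search(r"Date:\s*(.+)", fetch(BASE + link))
        if m:
            try:
                dt = parsedate_to_datetime(m.group(1).strip())
            except (TypeError, ValueError):
                continue  # skip dates that don't parse
            if dt.tzinfo is None:
                dt = dt.replace(tzinfo=timezone.utc)  # make all datetimes comparable
            dated.append((dt, link))

    for when, link in sorted(dated):
        print(when.date(), link)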


> There is also a vicious circle here, because the primary audience for a new programming language is existing programmers -- people who are already accustomed to using programming languages with English keywords. Folks who have trouble with English keywords probably aren't working as programmers, so their voices are not heard.[1]

This is a problem way beyond English and programming languages.

[1]: https://yarchive.net/comp/english.html


I can't find the thread or email archives any more, but I got flamed for daring to abuse the Internet for commercial software distribution purposes in July 1992, for posting this announcement of SimCity for HyperLook on NeWS for SunOS 4.1, and making it available on ftp.uu.net.

You could download the fully functional demo via ftp from ftp.uu.net, but it melted your city after a few minutes. Then you could buy a license key over the phone via an 800 number with your credit card and immediately unlock it, and they'd optionally mail you a box with a floppy and printed manual for an additional charge. There weren't any https or many http web servers at the time, and it was unwise to send your credit card number via email.

Before that time, that was strictly prohibited by the Department of Defense's (DOD's) official ARPANET Acceptable Use Policy (AUP), but around 1991 the National Science Foundation (NSF) lifted the restrictions on commercial use of the NSFNET. However, not everybody had gotten the memo by the time I released SimCity commercially for Unix in July 1992, so I got flamed, of course.

Rick Adams, who was at the forefront of commercializing the Internet, gave me an account on uunet to distribute it via anonymous ftp from ftp.uu.net, so he was fine with it, and I ignored the flamers. A huge amount of usenet uucp traffic was routed through the uunet hub, so I often got some strange misdirected emails to don@uunet / uunet!don, but never any with credit card numbers.

https://donhopkins.com/home/SimCity_HyperLook.gif

https://donhopkins.com/home/HyperLookSimCityManual.pdf

https://groups.google.com/g/comp.windows.x/c/ukCskm_x410/m/G...

    Date: Jul 26, 1992, 12:11:53 PM
    Subject: SimCity available via ftp

    [...]

    SimCity Ordering Information
    ----------------------------

    This version of SimCity will run in demo mode (you can play for a
    while, but you can't save your city, and after 5 minutes, something
    horrible happens to your city), until you get a license and install a
    valid key. To get a license, contact DUX Software at: 

    [...]

    To get the most out of the game:

    1. Get a license key!
    It's cheap, and you'll get a manual with lots more tips!
    2. Save your city often.
    You have to have a key to do that, though!
    3. Print out your city periodically. But don't kill too many trees.
    You can even edit the city map images in the HyperLook drawing editor,
    annotate them, print them out, and save them as drawing or EPS files.
    4. Don't forget to eat.
    Keep in mind the closing times of local restaurants, or keep lots
    of munchies on hand.
    5. Have fun!
    If things are going bad, remember not to take it too seriously,
    it's only a simulation!

ChatGPT recalls:

In the early days of the ARPANET, it was indeed prohibited to use the network for commercial purposes. This prohibition was formalized in what was known as the "Acceptable Use Policy" (AUP). The ARPANET, funded by the U.S. Department of Defense and managed by the Advanced Research Projects Agency (ARPA), was intended for research and educational purposes only. The AUP strictly limited the use of the network to activities that supported government-sponsored research and education, explicitly forbidding commercial activities such as advertising or distributing commercial software.

The relevant rule was part of the terms of service for ARPANET users and was enforced to ensure that the network resources were dedicated to academic and research endeavors. This prohibition was reflective of the original intent behind the ARPANET, which was to facilitate communication and collaboration among research institutions and government bodies.

The change in policy came with the commercialization and privatization of the internet in the early 1990s. One significant milestone was the transition from ARPANET to the National Science Foundation Network (NSFNET) in the mid-1980s, which continued to enforce similar restrictions on commercial use. However, by the late 1980s and early 1990s, the growth of the internet and increasing demand for broader access and commercial services led to policy changes.

The key changes occurred in:

1991: The National Science Foundation (NSF) lifted the restrictions on commercial use of the NSFNET. This was largely due to the recognition that commercial entities could benefit from internet access and that their participation could spur further development and innovation. This decision was encapsulated in a revised AUP that allowed for limited commercial use.

1995: The full privatization of the internet occurred when the NSFNET backbone was decommissioned, and the network's infrastructure was handed over to commercial Internet Service Providers (ISPs). This effectively marked the end of government restrictions on commercial use of the internet, leading to the rapid expansion of commercial online services, the birth of the World Wide Web, and the internet boom of the 1990s.

These changes were driven by the realization that the potential of the internet extended far beyond academic and research applications, and that commercial involvement was essential for its growth and sustainability. The commercialization of the internet has since had profound impacts on global communication, commerce, and society.

Rick Adams and UUNET

Rick Adams, leveraging his experience at the Seismological Research Labs where he managed the Usenet hub (seismo) and monitored nuclear tests for the government, founded UUNET Technologies in 1987. This company was crucial in transforming the internet from a government and academic tool into a commercial resource. As one of the first commercial Internet Service Providers (ISPs), UUNET offered dial-up and other connectivity services, extending internet access to businesses and individuals. By developing one of the first commercial internet backbones and essential infrastructure, UUNET significantly boosted the internet's growth as a commercial resource. Initially focused on Usenet and email services via UUCP, UUNET expanded to high-speed connections, supporting the burgeoning internet industry. The company's acquisition by MFS Communications in 1995, followed by a merger with WorldCom, underscored the growing value of commercial internet services and highlighted Adams' significant role in transforming the internet into a global commercial platform.

https://en.wikipedia.org/wiki/Rick_Adams_(Internet_pioneer)

Suck.com Net.Moguls Internet Mogul Trading Cards:

https://web.archive.org/web/20181211075708/http://www.suck.c...

Rick Adams, Front:

https://web.archive.org/web/20180802115113im_/http://www.suc...

Rick Adams, Back:

https://web.archive.org/web/20180802143444im_/http://www.suc...


Can we not use ChatGPT to write garbage HN comments please


What do you mean by garbage? Is it incorrect in any way? As I stated, I used ChatGPT because I tried and failed to find the original sources. I checked all the information myself, and searched google groups and the wayback machine for the original sources, and linked to what I could find (like the suck.com Net.Mogul cards), but I don't have all the usenet and email archives any more.

Can you please constructively suggest some better citations that have the same information instead of just complaining? Or should I just leave out all that relevant information, even though I checked ChatGPT's results against what I know from being there at the time? Or would you rather I just not disclose that I used ChatGPT to write the comment, so as not to trigger you?

Edit: Gumby: then that's what he should have said. But I'm responding to what he did actually say. Were all the facts correct, according to your memory? And can you cite a better source with the same information? I'd be grateful and glad to rewrite the comment to point to that, then.


Don, I’m guessing the complaint is really that sure, your comment is a piece of Usenet history, but not really germane to the post.


Posting AI generated info-dumps is considered bad netiquette.


We’ll see for how long I guess


So was distributing commercial software over the Internet, which was my point. And I got flamed for that, too. ;)

I at least tried to find original sources first, included the ones I did find, fact checked and edited ChatGPT's output, and disclosed its use. What more do you want, that information to be lost, or people not to disclose ChatGPT's use to avoid being flamed for violating netiquette? Is it prohibited by HN guidelines? Should it be?

Is posting working code partially written by ChatGPT that solves a problem and has been checked and tested also a violation of netiquette? Where is the line? Or does it move over time?

Sounds like an interesting topic for an HN discussion to me...


Sounds like an interesting topic for an HN discussion to me...

It's a mostly settled topic for now; you can find many moderation comments about it: don't use generated content in comments. The problem is that the overwhelming majority of LLM-generated comments are not checked for accuracy; they're just random summaries of articles or unverified pastes of whatever the model spat out in response to a prompt. Even though you're trying to do this 'right' - checking and/or editing the output, disclosing the LLM use for the relevant part of the comment, etc. - it's too difficult for anyone to tell the difference.


uucp and the internet were not the same things - but both hosted usenet and mail.

The big issue with commercialising the internet was really that, originally, internet-connected companies were prohibited from routing packets to third parties. What really changed things was the change of that policy - after that, anyone could join.


Of course uucp and the internet are different, but seismo, later uunet, run by the same person, were central hubs where they connected, and ftp.uu.net was one of the main software distribution points. Before seismo became the most popular, it was Bell Labs ihnp4 (Indian Hill), decvax, ucbvax, mcvax in Europe, etc.

That's why email addresses in signatures and mailing lists were usually expressed relative to seismo like "...!seismo!foo!bar!baz!user" (pure uucp routing), or "foo!bar!baz!user@seismo" (mixed internet/usenet routing), or later "user%foo%bar%baz@uunet.uu.net" (uunet gateway with fully qualified domain name), because everybody knew how to get to seismo (later uunet).


only if you happened to live in that geographical area (and seismo was a close routing) - I still have business cards with a ucbvax relative address, never used seismo


> Why do you think Linux ends up being the most widely deployed Unix? It's avoided niches, it's avoided inbreeding, and not being too directed means that it doesn't get the problems you see with unbalanced systems.[1]

[1]: https://news.ycombinator.com/item?id=40404440



