The great mistake I see here: treating everyone worth discussing as an organization.
The best corners of the internet are not groups of people: they are collections of content. The content cannot pay you a license fee. The content cannot demand itself be constrained to non-profit ends.
in 01993 only organizations had internet connections
i mean in some places you could get a dialup slip connection from something like netcom, but you would put the files you wanted to share on netcom's ftp server, not your own
Even if that isn't universally true, it's true enough to provide important context, so thanks for that.
My overall point doesn't just apply to this instance, though. It's something I see all over the place, even today, particularly in conversations about moderation and censorship.
Content has been siloed off so intensely that it's hard to even imagine a modern internet without arbitrary borders. Most of those borders are made across organizational lines. They are often made out of copyright, with the notion that some deserving party will be monetarily compensated.
Those borders usually don't align with the content itself. Instead, they become arbitrary hurdles, or even walls, making it infeasible or impossible to truly benefit from the content. Nearly every incompatibility in software was created intentionally, to cement and enforce these borders.
Now inference models (overconfidently called AI) like LLMs are all the rage. What do they do? They draw new borders. What are those borders meant to align with? The patterns that are already present in the content itself.
you're right, what i said was actually false, because there were a number of individual people who had their own internet connections. i've met some of them since then. but we're talking about maybe a thousand people out of the millions on the internet. i didn't know any of them in 01993. literally everyone i knew on the internet got their internet access by belonging to an organization that had internet access
A few people had T1 connections to their homes, something I was jealous of, but there was no way I could afford the cost. By the time I graduated, ISDN was available and my company paid for it, so it was even affordable. DSL came soon after and was affordable to 'normal' people.
I agree with him. It was huge news when the licensing model was announced. Many people had expected it to be freely available. The day it was announced was the day that Web growth truly began. The rats didn't just jump from the ship, they created new ships. I feel bad for Bob Alberti in particular.
I remember using Mosaic for the first time and thinking that it sucked ass in comparison to gopher - so much less information available, and it was very hard to just browse the hierarchy to see what was on a server.
On the other hand, I kind of miss Mosaic's ability to easily turn off image loading. There are more than a few sites that'd be improved by using that feature.
uBlock Origin in advanced mode allows easy blocking of images, both globally and for specific sites. However, in Firefox this only works for the normal view; in Reader View and in the Page Info dialog box it will be ignored.
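For anyone who hasn't tried it: the blocking is done with dynamic filtering rules, which you can also type by hand into the "My rules" pane. Roughly, and going from memory on the exact syntax (example.com is just a placeholder):

    * * image block
    example.com * image noop

The first rule blocks images everywhere; the second relaxes the block again on a site where you actually want the pictures.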
Everyone forgets the "directory" era. There was a time when Yahoo was the primary way to find things on the web. Getting your new website listed there was like winning the SEO wars.
I don't remember the timeframes exactly but at one point, I had a local "home page" on my Unix workstation that was basically a graphic and the links I was most interested in. Yahoo was probably there. Search engines were coming in but I mostly bounced around until Google came along. AltaVista was early on.
Back in the day, there were hosts on the internet that let you browse their entire filesystem via ftp. This was in the days before shadow password files were a thing, too. I'm too upstanding of a citizen to have done so, but a friend of a friend once spent a couple weeks of computer time running crack on those things and managed to gain shell access to some of the machines.
If you had such an ftp service, there was a good chance that eventually you'd end up unwittingly serving porn out of a twisted little maze of nested directories with embedded special characters and the like.
This killed a fair few useful-but-not-important sites.
At one point a well-known FTP server would let you access it with Samba, complete with R/W access to certain directories. I had an Amiga with a small hard drive, a modem, and the "VMM" virtual memory program. Experiments led to me creating a 2GB sparse file on the FTP server, mounting that server as a volume, and pointing VMM at that sparse file. Voila! An Amiga with 2GB of RAM, so long as you didn't mind swapping at about 5KB per second.
That was completely useless, but it was great fun to get working at all. I hope my sparse file was actually sparse on the server, too.
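For anyone who hasn't run into them: a sparse file is one where the filesystem only allocates blocks for the ranges that have actually been written; everything else reads back as zeros. On a reasonably modern GNU/Linux box you could make one like this (just a sketch, the file name is made up):

    # create a file that claims to be 2GB but occupies almost no disk blocks
    truncate -s 2G swapfile
    ls -lh swapfile   # apparent size: 2.0G
    du -h swapfile    # blocks actually allocated: close to zero

Whether mine stayed sparse on the FTP server would have depended on its filesystem and on how the Samba layer wrote into it.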
Huh, what did the Amiga's memory model look like? Could you construct a pointer to 2 GB different locations? Did it have segments like the 8086 or something?
It had a 32-bit address bus, so there was a nice, flat address space with room to directly address 2GB of locations.
Edit: You might've been asking a different question. Toward the end, lots of Amigas had MMUs, either as a separate chip or built into the CPU. VMM and similar programs used the MMU to implement paging.
Those are both interesting answers, and I didn't really know anything about the Amiga's architecture (other than to have imagined wrongly that it might have had 16-bit addresses). Thanks.
Not just porn, but also warez (pirated software). In fact, I'd say warez was much more common than porn, though that might be because of observer bias...
And the Unix variants people use (the Open Source BSDs, most Linux distros) are indeed legally not Unix, just imitations. And they helped kill Officially Licensed Unix mostly dead. Not just through price competition, but through adaptability and stability you don't get when the software is a Product owned by a Company with Executives who get Big Ideas and Grand Synergies. Usually, the most the official maintainers can do is tell you that you're on your own with your weirdo patches they'll never merge, but that's a lot better than a proprietary company saying it will never happen in this life or the next. Ownership can be death, as people don't like living under a sword of Damocles where someone else can unilaterally end what they're doing.
That's kind of exactly what I'm getting at. Officially licensed Unix is all but dead. It was outcompeted by freer (choose your own capitalization, either works) POSIX operating systems.
The plain-language explanations in the book[1] are still awesome, though. The exposition is fairly faithful to reality and doesn't really omit any details. It's a style from a different time that doesn't seem to have survived in today's popular writing, which has its own good traits, but not these.
I was curious about Yen and saw a small bio about him:
https://chinacenter.umn.edu/umn-china/history/alumni/disting...
It's a bit interesting, and shocking, that some of the members of the teams behind proto-HTTP software in the 90s were college graduates in the early 60s. If I'm doing my math correctly, Yen would be in his 80s today.