I wish we had a protocol for articles and text content that couldn't be turned into an application layer with JavaScript. JavaScript took the web in a weird direction.
Semantic articles, comments, upvoting, media - distributed p2p - could disrupt Google, Adtech, and increase signal to noise ratio dramatically.
Imagine Napster / Bittorrent for news. Shared p2p. That would be amazing. There's no HTML to enforce presentation rules. Your client can represent the content any way you want.
I love the web, but I hate what Google, Facebook, Reddit et al have turned it into.
The signal is in the text and media content. Not the HTML/JS shell it comes wrapped in.
I think p2p is the right model to avoid the walled garden boondoggle. People can bootstrap it by pirating content from the New York Times etc. at first, then add microtransactions later. Or maybe it should be completely free from monetization attempts.
Comments and upvotes could flow like email, completely distributed. They'd be cryptographically signed to prevent spoofing, and you could curate your own peer group.
The web doesn't have to be the final protocol.
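To make the signing part concrete, here's a minimal sketch of what a client could do, using PyNaCl's ed25519 keys; the message layout and the hex keyring are just assumptions for illustration, not an existing standard:

    # Sketch: signed comments/upvotes checked against a curated peer keyring.
    # Only the PyNaCl crypto calls are real; the message format is made up.
    import json
    from nacl.signing import SigningKey, VerifyKey
    from nacl.exceptions import BadSignatureError

    # Author side: generate an identity and sign a comment.
    author_key = SigningKey.generate()
    comment = json.dumps({"url": "gemini://example.org/article",
                          "text": "Nice piece", "vote": 1}).encode()
    signed = author_key.sign(comment)          # signature + message in one blob

    # Reader side: only accept comments from peers you've chosen to follow.
    trusted_peers = {author_key.verify_key.encode().hex()}   # my curated peer group

    def accept(signed_blob, sender_pubkey_hex):
        if sender_pubkey_hex not in trusted_peers:
            return None                        # not someone I follow
        try:
            return VerifyKey(bytes.fromhex(sender_pubkey_hex)).verify(signed_blob)
        except BadSignatureError:
            return None                        # spoofed or corrupted

    print(accept(signed, author_key.verify_key.encode().hex()))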
Google, Facebook, and Reddit are shaped by the constraints of the media they inhabit and have created. In particular, the massive reliance on adtech is a direct result of the web not having a functional monetization mechanism. In the beginning, e-commerce was frighteningly difficult, if not outright impossible, and people would pay heaps for ad space. It was far less difficult to just slap ads on things, and there were several waves of failed attempts at making a microtransaction/subscription model work for web content.
Furthermore, even if a "paying for shit" business model (e.g. microtransactions or subscriptions) had taken off, that wouldn't have solved the problem. Adtech exists because advertising fraud is a rampant problem that constantly reduces the value of the advertising over time. The ad network needs something to keep you from making fake clicks to get paid; and all that invasive tracking also makes the advertising far more effective and lucrative. Likewise, paid content has similar problems with piracy. It's not enough to just ask people to buy something - you also need DRM, watermarking, and content ID to keep people from giving it to someone else without you making your buck.
The problem with the web isn't the amount of control it affords developers, it's that getting people to create interesting content for your medium requires business models that only work if the users of your medium are in some way restricted or tracked. This is not unique to the web; had the web not gained a scripting mechanism or a way to track state (i.e. cookies), it would have remained niche like Gopher, and some other protocol or medium would have taken its place. Likewise, inventing a new medium with more restrictions will probably not take off; after all, people already have the web.
"The problem with the web isn't the amount of control it affords developers, it's that getting people to create interesting content for your medium requires business models that only work if the users of your medium are in some way restricted or tracked"
I think you have this backwards - I recall the web consisting of interesting content before adtech arrived. In my experience, adtech has a damaging effect on interesting content, instead encouraging content that only appeases algorithms. Finding the quality content has become a lot harder.
When comparing my ideas for stealth [1] with this, I found that a lot of people assume that a local browsing data cache makes them vulnerable.
But I think that a browsing cache makes peer clients stronger, because they don't have to make so many (potentially trackable) requests to a web server anymore.
Any request shared is another tracking prevented.
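As a toy illustration of the idea (not how Stealth actually does it - the second dict just stands in for content other peers already fetched):

    # Sketch: a local/peer cache in front of HTTP fetches, so repeat reads
    # never touch the origin server and can't be tracked by it.
    import hashlib, urllib.request

    local_cache = {}   # url-hash -> body
    peer_cache = {}    # pretend: content other peers already fetched and shared

    def fetch(url):
        key = hashlib.sha256(url.encode()).hexdigest()
        if key in local_cache:
            return local_cache[key]            # no request at all
        if key in peer_cache:
            local_cache[key] = peer_cache[key]
            return local_cache[key]            # served by a peer; origin never sees you
        with urllib.request.urlopen(url) as resp:   # only path the origin can observe
            body = resp.read()
        local_cache[key] = body
        return body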
When it comes to signalling from HTML content, I think the web is kind of broken. Sure, we have RSS (it was great!), we have opensearchdescription, and we have Dublin Core as a metadata initiative.
But what I think is missing is a way to extract the real content from a website. Imagine a real, working, well-defined "reader mode" that's integrated as a standard, including all the metadata like topics, keywords, etc. It would save so much computation time, considering all the SEO fraud that's happening these days.
Of course, incentives are always clicks, traffic, and tracking. Therefore nobody will implement it until forced to by Google or Facebook.
I decided to focus primarily on automation of extraction, because I think it is a necessary step to solve in order to reach a higher plane of knowledge automation.
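As a tiny illustration of how crude extraction is today, this is roughly what scraping the existing meta tags looks like; the tag names below (keywords, dc.subject) are common conventions, not the real standard I'm wishing for:

    # Sketch: collecting page metadata from <meta> tags with the stdlib parser.
    from html.parser import HTMLParser

    class MetaScraper(HTMLParser):
        def __init__(self):
            super().__init__()
            self.meta = {}
        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            a = dict(attrs)
            name = a.get("name") or a.get("property")
            if name and "content" in a:
                self.meta[name.lower()] = a["content"]

    scraper = MetaScraper()
    scraper.feed('<meta name="keywords" content="gemini, smallweb">'
                 '<meta name="dc.subject" content="protocols">')
    print(scraper.meta)   # {'keywords': 'gemini, smallweb', 'dc.subject': 'protocols'}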
> I wish we had a protocol for articles and text content that couldn't be turned into an application layer with JavaScript. JavaScript took the web in a weird direction.
Going "HTML 2.0 only" would be a good place to start.
I've also longed for a simpler world wide web for the purpose of distributing information. Hyperlinks, images, and structured text go a long way for the things that do matter.
I can't count the times I've spent evenings and nights reading a huge website, with dozens if not hundreds of long pages of information, all laid out in that 90s hypertext style that was heavily interlinked, so you could jump around and find more and more to read until you fell asleep. Today, Wikipedia comes to mind (and I'm grateful they're still like that), but back then a lot of dedicated information resources were maintained like that. And with minimal style, with no animated bells and whistles.
> Comments and upvotes could flow like email, completely distributed. They'd be cryptographically signed to prevent spoofing, and you could curate your own peer group.
> The web doesn't have to be the final protocol.
All of that, with the distributed nature of BitTorrent and automatically updated content, is available with Hypercore: basically, a site is identified by a public key, and all modifications are stored in an append-only log and distributed among all those who subscribe to the particular site, p2p-style. Check out Beaker (https://beakerbrowser.com/) to see what is possible.
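For anyone wondering what the append-only log buys you, here's a toy hash-chained version; the real Hypercore signs a Merkle tree with the site's ed25519 key and has a proper wire protocol, this only shows the shape of the idea:

    # Toy append-only log: each entry commits to the previous one by hash,
    # so readers can detect tampering or rewritten history.
    import hashlib, json

    log = []  # list of {"seq", "prev", "data", "hash"} entries

    def append(data):
        prev = log[-1]["hash"] if log else "0" * 64
        entry = {"seq": len(log), "prev": prev, "data": data}
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)

    def verify():
        prev = "0" * 64
        for e in log:
            body = {k: e[k] for k in ("seq", "prev", "data")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

    append("first post")
    append("edit: fixed a typo")
    print(verify())  # True; flipping any byte in history breaks the chain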
The techniques all exist or have existed. It's gaining traction that is the problem. In the meantime, Firefox's reader view (not sure what it's actually called) helps me sometimes.
I think that's precisely what Gemini is enabling here. If you want more formatting for your article, your Gemini site can just link to a TeX or PDF or whatever you like; Gemini can download files, they're just expected to display in a different application.
I like your ideas around Bittorrent and would love to see a system combining podcast-style RSS feeds and Bittorrent to enable decentralized video subscriptions. If we can get those RSS feeds onto some easy to find IPFS location, then we're set to replace Youtube as long as the famed Network Effect can be harnessed.
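A rough sketch of the subscription side, assuming feeds simply carry magnet links as standard RSS enclosures (the feed URL and the magnet-enclosure convention are assumptions, only the RSS <enclosure> element itself is standard):

    # Sketch: read an RSS feed and hand its magnet-link enclosures to a torrent client.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.org/videos.rss"  # hypothetical channel feed

    def magnet_enclosures(feed_xml):
        root = ET.fromstring(feed_xml)
        for item in root.iter("item"):
            title = item.findtext("title", default="(untitled)")
            for enc in item.iter("enclosure"):
                url = enc.get("url", "")
                if url.startswith("magnet:"):
                    yield title, url

    with urllib.request.urlopen(FEED_URL) as resp:
        for title, magnet in magnet_enclosures(resp.read()):
            print(title, magnet)  # here you'd enqueue the magnet in your torrent client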
> Imagine Napster / Bittorrent for news. Shared p2p. That would be amazing. There's no HTML to enforce presentation rules. Your client can represent the content any way you want.
This is ultimately just re-inventing Usenet; it might make sense to head back into the land of Usenet and start actually using the big network of providers that are already well set up to defend themselves from DMCA claims and other takedowns.
> The signal is in the text and media content. Not the HTML/JS shell it comes wrapped in.
Time was, we had healthy competition among desktop clients for Usenet, mail, etc. that would use common protocols but differentiate themselves on usability and feature set.
We see the shadow of that with website-specific clients (I use Hackers and Apollo on iOS, for example) that provide more functionality or a better look-n-feel on different form factors. But they're specific to one website, and while Reddit allows community creation, it's not the same as something actually decentralized, as evidenced by their insane level of censorship lately (e.g. banning entire communities that aren't breaking the law just because they disagree with their political opinions).
> Or maybe it should be completely free from monetization attempts.
Eh. I'd like to see a standardized section of document metadata that can include donation links; make it super easy for me to click a button and donate 25 cents' worth of BTC to someone, and I will click it all the time.
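Something as small as an agreed-upon metadata key would do. A sketch of the client side, where the key name and the exchange rate are made up and only the bitcoin: URI format (BIP 21) is real:

    # Sketch: turn a hypothetical per-page donation address into a BIP-21 payment link.
    page_meta = {"donation.btc": "bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxxx"}  # pretend this came from the document

    def tip_link(meta, usd_cents=25, usd_per_btc=60000.0):  # placeholder exchange rate
        addr = meta.get("donation.btc")
        if not addr:
            return None
        amount_btc = usd_cents / 100 / usd_per_btc
        return f"bitcoin:{addr}?amount={amount_btc:.8f}"

    print(tip_link(page_meta))  # bitcoin:bc1q...?amount=0.00000417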
> Comments and upvotes could flow like email, completely distributed. They'd be cryptographically signed to prevent spoofing, and you could curate your own peer group.
I'd really like to see an easy way of querying Usenet for comment threads about a topic, and then on any page I am visiting, I should be able to see comments by group, so as to pick the "comment section" that I want rather than just one shared one that isn't to anyone's liking. Gab's Dissenter provided an early vision of this: a meta-commentary mechanism not linked to the website in question.
Plenty of other things (NZB indexers) have built stuff on top of Usenet using it merely as a distributed, sharded datastore. There's precedent here.
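For the querying side, plain NNTP already gets you most of the way. A sketch with Python's nntplib (in the stdlib up to Python 3.12; the server and group names are placeholders):

    # Sketch: pull recent headers from a newsgroup and filter by subject,
    # i.e. the "comment section by group" idea.
    import nntplib

    def recent_threads(server, group, keyword, limit=200):
        with nntplib.NNTP(server) as nn:
            _, count, first, last, _ = nn.group(group)
            _, overviews = nn.over((max(first, last - limit), last))
            for artnum, fields in overviews:
                subject = fields.get("subject", "")
                if keyword.lower() in subject.lower():
                    yield artnum, subject, fields.get("from", "")

    for num, subj, sender in recent_threads("news.example.com", "comp.misc", "gemini"):
        print(num, subj, sender)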
That's a fair point, but really what you're saying to me is "the ideal microtransaction monetization system hasn't been invented or popularized yet", which should be heard as an opportunity, not as a deficiency.
Something I love about the "small net" (Gemini, Gopher, etc) is how easily it maps to terminal usage, as opposed to something like using lynx on the modern Web, for example.
Bombadillo supports more than just Gemini, but I was interested in Gemini exclusively and created my own Gemini browser inspired by Bombadillo.
Both bombadillo and amfora are great! I love bombadillo's cross-protocol nature (lots of small-internet sites mix WWW, Gemini, and Gopher links) and how it aligns links (indices in the margins), but I also love Amfora's syntax highlighting. I preview my own Gemini capsule in both before publishing.
I'm not sure we need non-web protocols.
A while ago I was experimenting with simple webpages in text-mode browsers. I was amazed by the instant page loads and really fast development (each technology you remove just makes everything faster: remove CSS and stop caring about style, remove JavaScript and think of simpler ways of doing things).
The web is great: it can be accessible to all humans and machines, and it uses syntax you already know, so there's no learning curve.
I really believe there's some unexplored world here in the text-mode web. Gopher and telnet are nice to play with, but I see the text-only web having a lot more practical uses.
Inspired by Bombadillo, I created ncgopher (https://github.com/jansc/ncgopher) which supports both gemini and gopher. It is ncurses-based, and I tried to give it a Borland Turbo Vision-like UI with menus and dialogs. Works on most systems and has packages for Arch and NixOS.
There's lots of Gemini content! It's just not on those pages. Check out GUS[1], CAPCOM[2], and Spacewalk[3]. GUS is a search engine, and the other two are self-hostable aggregators.
I don't quite understand it. I love that it is a gopher browser, but why is it better to, say, run telnet within it instead of simply at a terminal? Or ftp?
I could imagine it being a nice DNS explorer.
This is not a criticism! Or perhaps it is, but of my failure of imagination.
> The name Bombadillo comes from the legendarium of J.R.R. Tolkien, specifically The Lord of the Rings. Tom Bombadil, who was a jolly fellow, is a mysterious figure. A seemingly simple character that speaks in rhymed meter and lives in the woods, Tom is master of his domain and is in his way quite powerful.
He is truly the master of his domain: the ring, Gandalf's spells, etc. have no power in his realm. I always felt he was a survivor or holdover from an earlier age, and akin to a god.
He's definitely more powerful than a Maia (the ring has no effect on him). My headcanon is that he's either Eru himself or an insert of Tolkien, hence why he has no wish to stop Sauron (because he wants the story to unfold).