While this is somewhat cool, I have a few comments:
> The Web is, without a doubt, the most powerful research tool currently available to man. No longer must researchers comb through endless indices and catalogues to find what they are looking for.
True, but most people aren't researchers. Heck, I think most people don't even know what indices are :)
> The vast majority of those interested in a piece of work are merely readers, unable to contribute, only to consume.
Guess what, most people, 99% of the time, "consume".
> Billions across the globe rely on the Web to enhance their intellectual capabilities on a daily basis, building understanding through its rich mesh of connections.
Not really, billions across the globe check out funny cat pics, play games, watch you-know-what, etc. :)
Anyway, what I'm saying is: it's a nice vision of the world and the web, but that's not what the world mostly is. Good luck with it, but don't expect that a super contributor-friendly medium will turn the vast majority of people into constant contributors.
The majority of people may not be professional researchers, but the Web certainly has encouraged amateur research, even on seemingly frivolous topics. There are countless wikis detailing every single character and place in movies, cartoons, and video games. Before the Web, this would have required physical conferences, actual paper publications, and so on. Some of this of course existed (think SF conventions and fanzines) but on a much smaller scale.
> Yet, it's amazing how much information is freely available right now, given all the barriers that exist to host content.
While there are barriers, are you sure that's true? There are a million options to host content these days. It's much easier than 10 years ago and much, much easier than 20 years ago.
You can get a Wordpress blog or a Medium one with 2 clicks. You can make a Facebook account very easily. They might not be perfect, but the options are there.
A Facebook account isn't even a useful platform for the aforementioned funny cat pics, let alone more important and more complex content.
Hosted blogs can be better in theory, but in practice only an expert can use them for complex applications: for example, indexing a collection of cute kitten photos by tags and multiple criteria isn't the same as posting kittens without metadata and offering readers only titles and generic post listings.
Well, "2 clicks" is a bit of hyperbole, but in the spirit of your comment, yes, things improved even over that. The free hosting schemes available today are much more powerful. Just think about what you can do with a Google account. Docs, Drive, Blogger, Sites, Forms, Picasa & on & on & on
Hey, most people just use Windows as their only OS, and we still have Linux. Making the world a better place by forcing the majority of people to be smart is much harder and more subtle than creating intuitive and expressive learning materials, and IMO it should be left to economists.
I largely agree. I'm very much coming at this from the angle of 'knowledge work', and think a system of this kind is most useful to (though definitely not only useful to!) scientists, engineers, designers, lawyers, journalists, etc. While the population of knowledge workers is admittedly much smaller than the population on the whole, though, it's still sizable. Knowledge workers play an incredibly important role in our society, and anything that can amplify their intellectual capabilities is well worthwhile in my view.
If the target audience is knowledge workers then this is a nice step in a better direction. Bear in mind, though, that the goal of a knowledge worker is to understand content, so the focus of any such project probably shouldn't be on enhancing the manifestation of graph theory on the web but rather on the methods of education available.
Focus on how the content is presented to the user rather than the connections between content, because at the end of the day it's the content I care about and not the connections.
What? You need to survey hundreds of papers that are relevant to your research. Nobody has the time, nor is it expected or in any way efficient, to read even a majority of these in-depth.
I read notes, bibliography, and index first. In that order. The more hypertext-like a work, the better, though it should be sufficiently self-contained that the references are supplementary.
>> Billions across the globe rely on the Web to enhance their intellectual capabilities on a daily basis, building understanding through its rich mesh of connections.
> Not really, billions across the globe check out funny cat pics, play games, watch you-know-what, etc. :)
This!
Many resources seem to be distributed asymmetrically, and so is interest in intellectual development: only a tiny percentage follows the ideals of enlightenment and intellectual development.
So yes, even though we have the biggest resource of instantly available information under our fingertips, many seem not to use it at all for education and development.
I would love a web where you can comment with your forum circle on any URI available on the internet.
E.g. I open some research paper and click "comments" in my web browser, and I see comments from /r/machinelearning, Hacker News, etc. Real-time chat for each website would also be awesome.
Nowadays it works for me like this: I find something interesting X, then type "X site:myforum" into Google to learn more about it.
If I find a bug or a typo, or want to contribute a related resource, I need to go to GitHub, email the author, etc. I can't just open a "comment" box on a page without a comments section and contribute :(
I think the structure of the internet as we have it right now relies on Google/search engines too much, when it could be much better organised.
This utopian alternative is probably the internet the search engines fear, so the current consensus that this would make things "too messy" is presumably what the search engines want you to think.
But wouldn't it be the embodiment of the original dream of a decentralized, democratized web? Where loose collectives of people can self-organize their content, discussion and publishing around shared topics of interest, with minimal outside interference?
If I were a big search incumbent, I'd buy the patent/startup to this, and put it on a shelf someplace no one would ever see. And I'd keep doing this every time it was independently developed, until I had such a stranglehold on the global internet / the regulation was so locked down, that this sort of upstart decentralized "siloless / nomadic / free ranging" discussion / open hypermedia system could never come to be.
Thank god we don't live in that kind of a world, where just a few internet companies control and determine the majority of the world's interactions with the internet. Oh, wait...
Thinking about it... Somehow I don't think it was the browser's fault. The problem, I think, was that governments were too weak / slow / isolationist to ensure the internet be preserved as a true public resource / global commons. Sad. But I'm sure the story is not over. Not yet.
I'm not seeing anything preventing the web from continuing to work like you describe. What is stopping these loose collectives from publishing and discussing among themselves without outside interference? You imply that governmental neglect and bigcorp greed are to blame, but I'm not understanding how you connect those two things.
Self-publishing on the Internet is cheaper and easier than it ever was in the past. So why do people flock to Reddit, Facebook, etc? Because they are even easier and cheaper, are easier to find, and have a built in audience. But none of those facts relate to governmental regulation or lack thereof, that I can see.
The problem with self publishing is that you're practically invisible to the rest of the world and you'd have to join the big players' platforms to get some visibility...
That could erase the forum-ness of people and turn comments into a big heap of voices. Personally, I wouldn't like my forums if they were just random people hanging around. Moderators and implicit rules shape what a forum is and what you do on it, but with "just comments on a URI" all of that would be lost. While your wish sounds great, it has downsides, since as an entire humanity we're still not ready to discuss anything.
Just seems like a downgrade to posting it on said forum.
For example, now you need a UI that enumerates the articles that have comments on them. Maybe new comments even bump to the top. Starts to sound like a regular forum but with a small gimmick to me. There’s a reason this never caught on.
You can configure your client to ignore comments unless they're verified by [authority you trust]. Verification would happen cryptographically, like the GPG web of trust. It's up to you to choose which moderators you subscribe to, if any.
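To make that concrete, here's a minimal sketch of the client-side filtering, assuming Ed25519 signatures via the PyNaCl library stand in for the GPG web of trust; the annotation fields and the `trusted_moderators` set are invented purely for illustration.

    # Illustrative only: hide annotations unless a moderator key the reader
    # already trusts has signed them. PyNaCl's Ed25519 signatures stand in
    # for the GPG web of trust; field names are made up for this sketch.
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    def visible_annotations(annotations, trusted_moderators):
        visible = []
        for ann in annotations:
            if ann["moderator_key"] not in trusted_moderators:
                continue  # unsubscribed moderator: skip entirely
            key = VerifyKey(bytes.fromhex(ann["moderator_key"]))
            try:
                key.verify(ann["body"].encode(), bytes.fromhex(ann["signature"]))
                visible.append(ann)
            except BadSignatureError:
                pass  # endorsement doesn't verify; hide the annotation
        return visible

Subscribing to a different set of moderators is then just swapping out the `trusted_moderators` set, with no central authority involved.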
Could maybe try some machine learning for moderation. I'd imagine the non-contributors share some amount of the same communication patterns.
Every time I see a problem that requires a lot of repetitive work that most people would not like to spend a lot of time doing, I always try to think of how it could be automated with computers that don't get tired
There are several browser plugins that work that way. I think the main obstacle is that you need everyone on it, and also that there's not a lot of activity spread across the whole web, resulting in stale discussions.
Google Reader, I think, had this social thing built into it. It wasn't just an RSS reader.
> I would love a web where you can comment with your forum circle on any URI available on internet.
This would be automatically available if hyperlinks were two way connections.
All that would be needed is a culture of linking to what you commented about.
Which, I would guess, would have happened in such a web, because it would be a no brainer that any cool tool provides that. Much like the share buttons for SN we have now on every website.
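As a rough illustration of what that culture could enable, here's a toy sketch (not tied to any real service) of the reverse index an aggregator could build from the outbound links that tools record, which is essentially what two-way links would give you for free:

    # Toy backlink index: record every outbound link a page publishes, then
    # answer "who has linked to / commented on this URL?" by reverse lookup.
    from collections import defaultdict

    backlinks = defaultdict(set)  # target URL -> pages that link to it

    def record_page(page_url, outbound_links):
        for target in outbound_links:
            backlinks[target].add(page_url)

    def discussions_about(url):
        return sorted(backlinks[url])

    record_page("https://myforum.example/thread/42",
                ["https://example.org/some-paper"])
    print(discussions_about("https://example.org/some-paper"))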
I've been building an app that (among other things) can kind of generate this for Twitter--if people are discussing a link, it finds tweets about that link, quote-tweets of tweets about that link, and replies to tweets about that link (in addition to tracking the number of retweets and likes). (The app is pre-release but it's on social media as https://twitter.com/flockpath)
I've been thinking that I should 'break out' that feature as a Chrome extension. It would open the Twitter discussion for the link you're on, as a sidebar. Here's an image of how the sidebar would look (this is a sample of tweets about a particular article called 'Podcasting's Next Frontier'): https://imgur.com/a/GEcR7b1
And if that Chrome extension becomes popular maybe I would ditch the dependence on Twitter and also spider the rest of the web for discussions about the URL, especially big forums like HN, Reddit. I think https://techmeme.com does something like that.
As an aside, this is the kinda thing I miss about the circa 2005 web--when there was a prominent article published it would get discussed in a variety of forums, blogs, etc. Now the discussion has become centralized to a few aggregators and social media sites.
My app is actually meant to be like a 'newsfeed for tweets' that shows the top 10 tweets of the day engaged by people you follow, so it vacuums up a bunch of tweets (from tweets engaged--liked, retweeted, etc--by friends of my users) so then it just internally counts which have the most retweets/likes. So what my app thinks is the most retweeted thing might just be reflective of my internal data set rather than what's the actual most-retweeted thing.
You're going to get a lot of dismissal here because the idea is old and has been tried many times. I wouldn't let that stop you, annotation is very useful. The thing I am skeptical of however is that it makes the most sense 'as a forum'. I don't think it makes a very good forum, it does make an excellent collaborative research tool. More sane would be to have these annotations and import them into a real forum tool.
Sometimes content creators don't want to see comments from the users. See YouTube for example. Some creators prefer to close the comments section even if it's available.
If I'm not mistaken, the Volunia search engine had this very feature: it let you join a community and start discussing with other users on the same URL you're on at the moment.
I remember there used to be a Firefox extension that added comments/wiki per url. The problem is lack of popularity.
Something that integrates the HN/Reddit thread about a url to the side might work and might not need the mass of users since it's using the other platforms for the comments.
There have been dozens of solutions like this since the 90s that came and went, especially during the "browser toolbar era", and there might be new ones popping up now.
In the end it is a privacy nightmare. See the Stylish extension's privacy problems in recent months.
Yup. Circa 2000 I remember talking with one of those entrepreneurs about joining. He was so sure that being able to comment on everything was the next step forward in the web. At the time I found the idea both intriguing and suspicious, but the guy, a former surgeon, was such an obvious pain in the ass to work with that it ended for me there.
In retrospect, I recognize it as part of a common genre: grand ideas that are necessary only in theory, not in practice. It's easy enough to add commenting to any web page if the owner wants. It's easy enough to discuss any web page elsewhere, like here. And the grand idea treats good discussion as equivalent to global randos posting comments, which is demonstrably false. Real discussions are gardens that require careful tending.
My first defense now against grand ideas is, "That sounds cool, but who needs it enough to pay regularly for it? And why would this be better for that person than whatever they're doing now?" It turns out that many grand ideas only make sense from a 40,000-foot perspective; if you look at it from the point of view of somebody on the ground, it's obviously just a fuzzy cloud.
This looked interesting, so I started reading the author's dissertation, but I've been sidetracked/put-off by its overbearing copyright statement.
This copy of the dissertation has been supplied on
condition that anyone who consults it is understood to
recognise that its copyright rests with its author and
that no quotation from the dissertation and no
information derived from it may be published without
the prior written consent of the author.
At least in the US, the fair use clause under the copyright law allows for limited quotes/excerpts w/o asking permission.
Sorry about this! It was a stock copyright clause provided by my university. I’ll look into getting it removed, and in the mean time am perfectly happy for you to use the work (with proper attribution) as you see fit.
EDIT: The problematic clause in question has now been removed.
We almost got there with Pingbacks being the first step. Then they devolved into meaningless spam. Without a system of manual curation, it's impossible to build something where _everyone_ contributes. Spam scales easily; moderation and curation do not. A thousand good links buried under a million spam links don't add any value.
And we can talk about reputation and proof of stake systems until we're blue in the face, but so far, nothing exists that actually works. If it did, we'd already be using it.
My gut sense is that proof-of-work as a spam deterrent can _improve_ the signal-to-noise ratio, but won't bring it up to an acceptable level.
Say you have a simple proof-of-work protecting some action (posting a comment, voting up/down a story, etc.), and you have its difficulty tuned to allow a median-productivity human on a typical desktop computer to do that action at their typical rate.
A spambot doesn't need to sleep, take days off, and can get illicit access to much more computing power than any one human.
It _would_ probably shift spamming activities to focus on more central, high-value venues, though. hmm.
Not everyone has CPU cycles to waste, like users on mobile. Even then, the cost of a consumer contributing one connection is an amount of work that's totally tractable by a spammer.
Unless you get into tokens and paying money to contribute. Which defeats the purpose. Why would I pay money to contribute to a decentralized service?
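For concreteness, here's a rough hashcash-style sketch of the kind of proof-of-work gate discussed above; the difficulty constant is arbitrary and only illustrates the tuning problem (what mildly inconveniences one human is trivial for a botnet):

    # Hashcash-style sketch: the client must find a nonce such that
    # sha256(action:nonce) has at least DIFFICULTY_BITS leading zero bits.
    import hashlib
    from itertools import count

    DIFFICULTY_BITS = 20  # illustrative; real-world tuning is the hard part

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            bits += 8 - byte.bit_length()
            break
        return bits

    def mint(action: str) -> int:
        """Client side: grind nonces until the target is met."""
        for nonce in count():
            digest = hashlib.sha256(f"{action}:{nonce}".encode()).digest()
            if leading_zero_bits(digest) >= DIFFICULTY_BITS:
                return nonce

    def check(action: str, nonce: int) -> bool:
        """Server side: one hash suffices to verify the stamp."""
        digest = hashlib.sha256(f"{action}:{nonce}".encode()).digest()
        return leading_zero_bits(digest) >= DIFFICULTY_BITS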
There was a very compelling vision for the web, by Doug Engelbart and others in the 60's and 70's. Unfortunately, because of the computing culture's attitude of forgetting even recent history and not understanding what foundational work was done (like real scientific fields do!), the web folks didn't have a lot of that context.
Alan Kay, in many of his talks, has discussed how the browser should really be more like an operating system kernel. The web is a mess and we can still build interesting things with lots of hacking & engineering, but it's fallen short of the original vision. And now we're locked into the tooling we've built.
WebAssembly will no doubt give us more freedom, but still has a lot of constraints. Also, it's fundamentally a hack built on top of the browser ecosystem, not replacing browsers entirely!
It seemed kind of cool but doesn't do much you can't do with webpages and JavaScript. Its scripting language seemed much easier to do some things with than JavaScript, though. I kind of imagine WebAssembly will go the other way and make things more complicated.
> It seemed kind of cool but doesn't do much you can't do with webpages and JavaScript. Its scripting language seemed much easier to do some things with than JavaScript, though.
In our current developer culture we tend to view things less holistically and more in terms of this-or-that language or system. Hypercard (which was composed of both the visual system and Hypertalk) was one holistic system that fit extremely well with personal computing of the early to mid 90s (which is why recreations of Hypercard on today's machines miss the point).
This was a completely different view of personal computing, one that sought to reduce the chasm between "developer/programmer" and "user". Hypercard allowed authoring in the computing medium, and did so by permitting users to take advantage of what was new about computing as opposed to other, older media. In fact, they were called "authors" and there were thousands of them — most of them weren't professional programmers.
There is a lot that hypercard could do that the modern web cannot. I cannot copy and paste a button — retaining all of its internal functionality in a different context — from one web page to my own (at least not without a lot of trouble). This was the de facto way to get started in Hypercard.
Hypertalk is another interesting part of bridging the divide. It is difficult for entrenched programmers to reckon with because it's more like natural English and (unlike most programming languages) is easier to read than it is to write. But for regular people it makes sense.
Final point: everything good about computing is about metaphors. Hypercard had one of the best metaphors since spreadsheets: the concept of stacks, cards, and objects on cards. That's all there was and it was easy to understand how these pieces interact. The web does not have anything like this for its "authors" because it is inherently unfriendly to them.
Most saddening, perhaps, is the way in which the Web constrains the use of links. For example: although the link is the primary form of reference on the Web, underpinning the tangle of connections that make the system so useful, the ability to create new links is a privilege granted only to content producers. The vast majority of those interested in a piece of work are merely readers, unable to contribute, only to consume.
The sad part is, we already have the technical infrastructure in place to support those user contributions - it's the Comments section of any blog-shaped site.
So called "Web 2.0" was all about readers contributing feedback to whichever content was being published through a channel. But the shape it took was not the original hypermedia vision, but a conversation of loosely related comments that could potentially go off-topic.
To support the annotation feature described in the article, it would just require that common web platforms allowed their current comment systems to attach comments to paragraphs in the article, and show these comments as side notes. Current moderation functions could be used to separate the wheat from the chaff. But it would require readers to adapt and learn to tap this resource to its fullest potential.
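As a sketch of what that could look like as data, here's one purely illustrative model for paragraph-anchored side notes; the field names are made up for this comment (the W3C Web Annotation model defines richer selectors along the same lines):

    # Illustrative only: anchoring a comment to a paragraph instead of the
    # whole page, with the existing moderation flow reused for side notes.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SideNote:
        page_url: str          # the article being annotated
        paragraph_index: int   # which paragraph the note attaches to
        exact_quote: str       # quoted text, so the anchor survives small edits
        author: str
        body: str
        moderated: bool = False

    @dataclass
    class ArticleAnnotations:
        page_url: str
        notes: List[SideNote] = field(default_factory=list)

        def notes_for_paragraph(self, index: int) -> List[SideNote]:
            return [n for n in self.notes
                    if n.paragraph_index == index and n.moderated]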
it would just require that common web platforms allowed their current comment systems to attach comments to paragraphs in the article, and show these comments as side notes
I think that's a somewhat narrow view of what a user contribution could look like. Adding feedback is great, but imagine what the web could be if users could do more than just comment. I'd love to see a blog application that supported user contributions like fixing spelling/grammar, adding links, injecting additional paragraphs to explain complex topics, captioning pictures, etc. All those things could be suggested as comments that the author would manually use to improve their article, but I think it'd be better (faster at least) to do it automatically.
Idealistically I'm thinking of something that's the best parts of Medium and Wikipedia.
Mediawiki, the software behind Wikipedia, already has all those features, including the possibility of moderation by privileged users or automatic updates, on a per-page basis. You'd just need to use the software for blog content rather than encyclopedic content.
I'm asking you about the details that make them different, and which you consider relevant for a "web with user contributions" in the way you described.
The difference is that those features would be implemented differently in a blogging context. I think that they're things that would improve a collaboration on blog to move from simply getting feedback from users as comments for the author to use to actually collaborating on something to make it better. I haven't thought about what those implementation differences would be; it'd be a lot of work.
Right; we just need everybody, everywhere to adopt hypothes.is; instead of one of all the other competitors.
Adopting a single protocol by the masses is something that rarely works. It's much easier to gain critical mass if a server platform (like Wordpress) included an annotation module as part of its default modules, so that it appeared at many websites as soon as they update to the latest version. Then, it could catch on as a popular feature, and other platforms would start to build their own implementations, making it more visible and gaining traction all over the web.
(That Wordpress module could very well be the hypothes.is software, if they're compatible. But it really needs to be adopted server-side by a popular service to gain traction).
" it's the Comments section of any blog-shaped site."
Something I never read on any site. It's just pure racism/muppets talking shit. If there were a plugin to block them I'd use it; I feel dirty even knowing they're down there.
I always read them on any site. I want to know even about racism and muppets talking shit, rather than be surprised when it appears in real life. And usually it's just real people, leaving real comments, from their own viewpoints.
No, I'm talking about actual Nazis, racists, climate change deniers, conspiracy theorist and other idiots. I'm more than familiar with their "typing" - I don't need to see it at the bottom of every story I read.
I feel sorry for you - the original early-'00 blogosphere was actually very good, with all sorts of people building connections and intelligent debate through comments. After a while, the conversation was so deep that comments ended up being too short, so they had to define Pingbacks so that people could post elsewhere while still connecting with the source material.
And then spammers and the political sphere co-opted the technology, and it all went downhill.
I think there is still space, for defensively-minded geeks, to create ways to communicate that can keep debate open while shutting down the trolls. It's clear we don't have such a thing at the moment. I fear, however, that well-intentioned researchers like OP will simply end up building new systems that will replicate the mistakes of somewhat-naive early pioneers.
I remember those blogs. Somewhere between Usenet and Facebook. I found the pingback things annoying. If I remember rightly, it was four or five pages of a copy of the first line of the article plus a link to someone else's blog, but with no reason given for why I'd want to roll my sleeves up and start clicking on them.
Pingbacks were a 0.1 implementation of a concept that never got a second chance, because then the walled gardens arrived and destroyed the ecosystem. The main problems they had were what you mention: each ping would contain only a few lines (which I think was a limit in the standard, to limit spamming) and the automation mechanism was a bit stupid (every reblogging ended up generating a superfluous ping). People who cared could fix the first issue (with a sort of "above the fold" summary), but the second was enabled by over-eager engines and ended up ruining it for everyone.
Great to see those ideas discussed! It is a little strange, however, that Ted Nelson's ideas around Xanadu and its link representations aren't even mentioned.
Didn't Google used to have a project that allowed readers to annotate the web?
If I understand it correctly, it sounds like the author wants something similar, only categorised by field of expertise instead of being a free-for-all (and not owned by one company). This would require some kind of moderation, in one form or another.
>One could imagine a system in which multiple sets of links could be associated with a single resource to accommodate this, allowing for a range of different viewpoints on how things are connected.
Although this talks about freeing the web from the browser, this seems like a pretty good case for augmenting the browser experience. The first thing would be a browser plugin that ignores all links (which is probably a good default for anyone interested in reading an article and not getting distracted), which then just allows a layer on top for highlighting sections and creating your own links. I expect this probably already exists.
I think Firefox's reading mode should have an option to turn off links.
The missing step here is connecting to other programs, but this is a first step.
Author here. It’s definitely true that extending existing browsers is the fastest route to this kind of behaviour (though it’s not clear to me if it’s the best route). In fact, if you squint a bit (or maybe a lot), it could be argued that with application-specific URL schemes we kind-of sort-of already have the primitives we need to make something like this work.
Practically, though, if you want the multi-program side of this (which is kind of orthogonal to the 'multiple perspectives on how things are connected' side), then to make this kind of multi-window hypermedia system usable I think you need to have deep integration with the window manager. While Chrome OS tries to achieve this kind of integration by making the browser the OS, I propose that the best way forward here is to effectively make the OS the browser, as I discuss in the article. (Of course I’m not talking about the kernel when I say ‘OS’ here, but the desktop environment). At that point, I’d say the browser is different enough to the browsers of today that the description of ‘freeing the Web from the browser’ is still accurate.
Thanks for the reply! It's a really interesting piece of research. Yes, you're quite right, the browser alone can't interact between all the programs, I was purely thinking of low hanging fruit for the browser plugin.
I watched a Google interview with Douglas Engelbart [0] where people asked him a couple of times if Wikipedia was what he'd envisioned for hypertext, he was very polite about it and said it "was a good start", but he'd clearly wished that we'd got much further by now.
What you're suggesting is definitely a step forward.
The multi-program facet of this reminds me of Plan9's [plumber](http://doc.cat-v.org/plan_9/4th_edition/papers/plumb) concept, which is an OS component allowing communication between programs in the form of "plumbing messages". The plumber processes plumbing messages according to a rules file, passing them on to other applications. This rules file allows the user to easily configure which piece of data gets delivered where in a uniform fashion.
Plan9's plumber was mostly text data oriented, but it could in principle work with any kind of resource (images, audio files, documents). It seems naturally extensible to URIs. It goes a bit deeper than your idea by saying URIs and browsers aren't special and that all kinds of data should be plumbed between all kinds of applications (for instance, a text editor detecting a DOI in a text file and converting it into a clickable link that sends a plumbing message to the browser, which would then open the corresponding doi.org page).
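To illustrate the rule-matching idea, here's a toy re-imagining in Python rather than Plan 9's actual rule syntax; the DOI pattern and the program names are placeholders, not anyone's real configuration:

    # Toy plumber-like dispatcher: ordered (pattern, handler) rules decide
    # where a piece of data gets delivered.
    import re
    import subprocess

    RULES = [
        (re.compile(r"^10\.\d{4,9}/\S+$"),        # looks like a DOI
         lambda m: ["firefox", f"https://doi.org/{m.group(0)}"]),
        (re.compile(r"^https?://\S+$"),            # any other URL
         lambda m: ["firefox", m.group(0)]),
        (re.compile(r"^\S+\.(?:png|jpg|gif)$"),    # image file paths
         lambda m: ["eog", m.group(0)]),
    ]

    def plumb(data: str) -> None:
        for pattern, handler in RULES:
            match = pattern.match(data)
            if match:
                subprocess.Popen(handler(match))
                return
        print(f"no plumbing rule matched: {data!r}")

    plumb("10.1000/xyz123")  # would open the corresponding doi.org page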
The problem I think is that most sites and businesses would be up in arms over this idea. FB wants to have links pointing to other FB pages. Many websites do a lot to prevent users from leaving. How do you plan to overcome this?
That's a good point, and not one that I've considered particularly deeply to be honest. (I'd love to hear other people's point of view on the topic!) I guess in many ways the situation is similar to that around adblock. Ultimately, the links that are overlaid on a particular page should be solely and completely under the control of the user. If the technology that everyone is using permits this kind of behaviour, I'm not sure companies have much choice in the matter.
> If the technology that everyone is using permits this kind of behaviour, I'm not sure companies have much choice in the matter.
The technology used to permit this; the current trends go against this direction. I'm of course referring to JS-rendered content and SPAs. I imagine, were your idea deployed, most of the time would be spent on fixing broken links and link anchor points.
I support the goal you're trying to achieve. But between greedy publishers and their ToS and JavaScript infecting everything like a pathogen, I fear that we'll have to spin up an alternative Internet for knowledge work. That Internet would be reader-friendly (both human and machine kind) and much more static.
Most documents are linked according to their topic. If you go to a math page, you get math links. Ancient and holy texts could be referenced in multiple ways, because they can be interpreted in multiple ways (or the correct way is unknown), but not something like guides. If a guide is written with more than one meaning, it is not a good guide. Learning should be seen more like a tree: you go down a route of branches and specialize further in its directions, and the branches lead to the references. At the beginning you learn the language, later you know it; otherwise every word needs to be linked.
The simplest counter-example is Wikipedia. Most links on any page could lead to generic term definitions, or they could link to explanations of how those terms work within the context of the page.
I.e. a link to "synthesizers" on a page about FM synthesis could lead to a generic article on synthesizers, or to a list of FM synthesizers released to date.
And that's just the most obvious example. Having different "linking contexts" would allow adding more links without turning the original document into a mess.
Again, using Wikipedia as an example, you could add another "context" to the page by linking various paragraphs to citations. That would be much more user-friendly than what they do right now with bracketed numbers.
Great article, thanks for sharing. Of relevance here is TiddlyWiki (www.tiddlywiki.com), a personal note taking tool, that includes some of the salient features, like transclusion and user-generated linking, mentioned in the article. In addition, the approach supports a built-in programming capability to allow computation on content (e.g., dynamic filtering and content generation) and extension via plugins. All information (content and Javascript code) is within a single html document.
Comments and feedback won't work for a site with a lot of readers or viewers; they don't scale. If just 20k people read your post, and one-quarter of them comment, you'll be flooded and not able to find anything meaningful. I'm already following some YT channels where the creators have stated, "we can't read the comments, requests here are ignored."
YT comments simply suck in every area. Creators can't make polls, sorting seems random, and upvoting has no filters like funny/insightful, at least. Moderation seems not to exist at all. No groups, no sections (those 4-5 in the sidebar do not count). It is the worst implementation of all possible ones, and YT has done nothing to fix it for years. They have time to make "material designs", though. If you imagine that videos are just OPs in a forum, you'll see how crappy it is.
I had to stop reading after a few paragraphs. His assertion that linking belongs to "content producers" is ludicrous. Those content producers have given users the tools to do linking themselves, and they express themselves in a variety of ways over a variety of mediums.
You need to learn how to write to express yourself with the written word, yet how many people do we hear harping on how difficult it is to learn language?
At some point we can draw a line and say, "if you want these abilities you need to learn these things". We did so with literacy, with driving, and with so many professional trades. We can do so with basic internet literacy.
> Those content producers have given users the tools to do linking themselves, and they express themselves in a variety of ways over a variety of mediums.
? How can I link to, say, a quote in that article that offends you? I can't. How can I link to a youtube video and add my own commentary links? I can't (I think) without creating my own video that explicitly copies the original (rather than consuming it).
> You need to learn how to write to express yourself with the written word, yet how many people do we hear harping on how difficult it is to learn language?
I definitely wish people took the time to work on that skill instead of assuming it's both automatic and that their level is adequate. Nonetheless, this doesn't seem related to the point of the article - not that linking is HARD, but that, outside of whatever the creator enabled, all we get are top level URLs.
> How can I link to a youtube video and add my own commentary links? I can't (I think) without creating my own video that explicitly copies the original (rather than consuming it).
You can write a blog post and embed the video with timing information.
Of course embedding is largely the same as transclusion, among the features touched upon by open hypermedia.
While you don't get full expressivity without a blog or something that allows full HTML, you can get most of this in other mediums (e.g., Twitter) where the video is embedded automatically given a link. In theory OEmbed (https://oembed.com/) is a standard for something like transclusion, though it's not very widely supported.
Constructing a link to a point in time in a video is a non-standard operation (you just have to know the YouTube interface). Similarly there aren't great patterns for finding a link to a position in a web page. But the pieces are all kind of there, though missing the controls and patterns to bring them together. Which is a failing of browsers, though that points in the opposite direction of the claim in the title of this piece (i.e., it implies to me that we need browsers to go deeper, not increase the breadth of linked applications).
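As a small, concrete example of that "you just have to know the interface" problem: deep linking into a video means hand-building a provider-specific URL. YouTube happens to accept a t= parameter on watch URLs and start= on embed URLs; other hosts need entirely different recipes, and the browser offers no general control for it.

    # Provider-specific deep link construction; VIDEO_ID is a placeholder.
    def youtube_deep_link(video_id: str, seconds: int, embed: bool = False) -> str:
        if embed:
            return f"https://www.youtube.com/embed/{video_id}?start={seconds}"
        return f"https://www.youtube.com/watch?v={video_id}&t={seconds}s"

    print(youtube_deep_link("VIDEO_ID", 90))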
I believe what is meant by the statement is only content producers can add external links to content they produce.
I am unable to take any content and add additional annotation for others to utilize. There appear to be other projects that try to introduce this functionality though such as hypothes.is [0].
It looks like the author is proposing an overlay system that can be applied to a variety of content types. Users can then apply different overlays that are geared towards different topics / audiences. Basically a swappable reference section.
I don't get what the actual problem is and how today's technology is limiting anyone from creating something like a link-sharing site (e.g. Reddit) or link redirection (hello, URL shorteners). Of course, if you need control over the content, then you have to build a content management platform too (centralised or decentralised does not matter). But then you deal with boring copyright and other legal stuff.
And hey, browsers do allow extensions nowadays. And if that's not enough, build your own.
There used to be a browser plugin that allowed users of the plugin to register comments on parts of web pages.
In general I like the idea behind the article, of enriching content by allowing readers to add links, but in practice this opens the door for spammers.
Did I miss something, or does this article COMPLETELY ignore the amazing possibility for abuse this opens up?
Now any a-hole can make a public link in your page (or whatever future form that takes)? Nah, no way _that_ could go wrong. The word "abuse" appears literally zero times in the 200-page PDF.
This is exactly where my mind went. The amount of spam even small-time bloggers have to contend with in their comment sections is astounding; now imagine if every page on the internet had a comment section and no filtering or moderation... Even today YouTube comments suck because of the near-total lack of moderation.
Even on sites like this one or The Verge, with paid and volunteer moderators who in theory monitor things 24/7, I still see useless spam. I'm not even talking about individual people with opinions some might find offensive, I mean just outright spam advertising.
Abuse of this sort of functionality is definitely something I've thought about, but discussions around these sorts of ecosystem issues weren't my focus in this work. Ultimately, there are solutions. For starters, you almost certainly don't want every link or comment from any user to show up automatically for everyone the instant it is made. There's a fantastic comment by user 'enkiv2' over on lobste.rs about this (relating to design decisions around Xanadu) that also roughly reflects the kinds of assumptions that I've been making in my prototyping about how this might work in practice: https://lobste.rs/s/p0sgoj/freeing_web_from_browser#c_vivphl
"Different people have different perspectives on how information should be connected, so why do we not allow these range of perspectives to be represented and shared digitally? Why limit ourselves to just one point of view?
...
Why re-create code editors, simulators, spreadsheets, and more in the browser when we already have native programs much better suited to these tasks?"
The title is something I contemplated and began to address long ago, only on a personal level.
With respect to the first question, perhaps this goes to the poor mechanism promoted by Google, to rank the www's contents by "popularity".
This mechanism obviously succeeds for purposes of measuring www user opinion and selling advertising (the latter not anticipated by the founders in the early years). However, it falls short in the non-commercial context, e.g., the academic setting out of which the company grew. Anyone remember "Knol"?
Today Google search (and probably others seeking to emulate its commercial success) intentionally promote a pattern of usage of their cache/database where its users never reach "page 2" of search results. The company has built their ad sales business on the idea that one perspective ("the top search result") should not only prevail but also that, optimally, other results need not even be considered. It should be obvious that in a non-commercial research context, this is not optimal.
If the www is 100% commercial then of course this is not an issue. But "the www" is difficult to define. All httpd's on any accessible network? All httpd's listening on accessible addresses with corresponding ICANN-registered domainnames? All pages crawled by a commercial bot, deposited in a commercial www cache and made accessible to the public? And so on. In any event, if users only view the www's supposed contents through the lens of a commercial entity, the perception of what the www actually comprises may be manipulated in a way that suits commercial interests, e.g. the sale of advertising.
As to the second question, when given the choice I do not use a popular web browser. The author mentions the utility of "native programs". I would prefer the term "dedicated programs". Programs that perform essentially one task, or "do one thing". Whether such programs can perform their dedicated tasks better than an omnibus-styled program that performs many, varied tasks is a question for the user to decide. For example, the author answers that native programs are "better suited" than the web browser.
The "web browser" has become a conglomeration of once dedicated programs.
There are such dedicated programs for making TCP connections over which HTTP commands can be sent and www content retrieved. This is a task that web browsers can perform, although some users may prefer a dedicated program. In this way content retrieval can be separated from content consumption, alleviating many of the www annoyances such as user tracking, manipulation and advertising.
We do have a way to create links between two documents without editing either document: create a new document that links to both documents. This is a normal, though informal, activity.
And of course simply linking two documents together isn't that useful, you have to say WHY they are linked. I.e., the semantic triple (https://en.wikipedia.org/wiki/Semantic_triple) of subject–predicate–object, or maybe more informally you are simply saying X relates to Y because of Z, where Z is akin to the predicate.
Currently in HTML hypertext we're stuffing Z into the link text, which sometimes works nicely and sometimes works very poorly. But in an external document you have all the space you want to explain the relation between the documents (a minimal sketch of such an external link document follows after the list below).
Obviously there's lots of shortcomings of adding a new document to the web to explain every relation between existing documents. But I think it's a good starting point. We're missing things like:
1. Reliable deep linking to documents. We have ids, YouTube timestamps, etc., but finding these is an ad hoc process and they aren't always available.
2. Widespread transclusion tools. We actually have some now, in the form of link previews or OEmbed. When you post a link in a comment or post on Twitter or Facebook, they effectively transclude the link into the document. Not fully interactive, but it might be a better balance between linking and viewing than traditional/literal transclusion.
3. Discovery of these annotations or commentary. There's a hard CS problem here, to maintain privacy while also trying to find serendipitous results. Maybe it involves pre-loading lists of documents from the locations you want to "discover" from. Maybe it requires some understanding of privacy levels, or whether content is personalized or public. Or we use the technique we have now: lead with commentary, with no attempt to discover it after the fact. I.e., I know there are comments on https://www.reinterpretcast.com/open-hypermedia at https://news.ycombinator.com/item?id=17690865 because I found the document on https://news.ycombinator.com/news – is serendipity even a thing in a place as large as the web?
4. Maybe publishing tools... do I want to post a Tweet to describe every relation I see? But maybe I do, because even if organic discovery is possible I probably also want to publish a feed of my own annotations, and I want to be part of a community of people doing this, and Twitter is a reasonable example of this.
5. Some sort of representation of these links when they've been found. Even without fancy discovery this is necessary. Right now if I click on a link from a post like: "OMG this is the stupidest argument ever: http://example.com/some-stupid-document" it will look like any other page I've opened. Only if I remember well why I clicked on the link will I understand that I've been offered something with derision. The browser has to do something here, all it has currently is the back button to understand why you've gotten somewhere (and that doesn't even work consistently in these cases).
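Here's the minimal sketch of an external link document promised above. The JSON shape and field names are invented for illustration; the W3C Web Annotation model is an existing, richer take on the same idea.

    # A standalone document that connects two existing pages and says why.
    import json

    link_document = {
        "subject":   "https://example.org/essay-on-hypertext",
        "predicate": "rebuts",  # the Z in "X relates to Y because of Z"
        "object":    "https://example.org/critique-of-hypertext",
        "comment":   "The essay answers the critique's main objection.",
        "author":    "https://example.net/people/alice",
    }

    print(json.dumps(link_document, indent=2))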
Oh boy, it's the semantic web all over again. The lack of citations of the copious corpus that exists, and the fact that it's never named for what it is, are appalling.
"what is really lacking — in my view — is research considering the human factors at play"
There you go: if someone is interested in the topic, here's a citation back from 2005, which should be enough to find more references and research: http://kmr.nada.kth.se/papers/SemanticWeb/HSW.pdf (they even have a workable concept browser, go figure)
To be clear, I’m familiar with the semantic Web and did a reasonable chunk of reading about it when doing this research, but view it as only tangentially related to the ideas I talk about here. If you’re looking for citations around this work, check the full dissertation — there are plenty.
Thanks. It's not clear in the article that this is based on another work. Can you provide a link? Reading the article I too thought linked data was noticeably absent.
And this is a novel way to interact with people on the internet using concepts delivered in words, only tangentially related to a comment on a discussion board.
One way would be to create a browser/portal combo that only indexes WebGL/WASM apps, for example. You could still visit the info page about that app via a normal browser, but would have to install the specific browser to actually use it.
The irony is that a walled garden might have a valid use case (not to wall us in for a vendor, but to wall us off from old tech).
Every year for the last 25 or 30 years I see this kind of thinking about "information processing" show up.
What it represents is a gigantic failure of computer science departments worldwide to connect their theories of information with education departments' theories of information.
Most techies who mentally masturbate about how information should be organized and optimally consumed to maximize the production of good outcomes have never heard of the word pedagogy.
Without understanding that complex topic, they spend their time busily producing articles and collecting them in libraries that only they can navigate. They do this scratching their heads, wondering why it isn't creating global enlightenment, ever stuck in some fool's quest for a better magical library that will inject wisdom automatically into their heads.
After they hear of pedagogy, and after they read a couple of textbooks on how to turn a first grader into a tenth grader, they finally understand the difference between a library and a school. They then proceed to think up ways of converting the web (a library) into a school, most of the time not even fully aware of what they are attempting.
And that's why it always fails. Schools have already been invented. They already exist. They are constantly evolving. And they will always be better than a library at producing information processing in the human mind. Every first grader knows not to walk into a tenth grade classroom and try to solve the problem on the board there. Now step back and take a moment to think about why that automatically doesn't happen on the web.
And what the consequences are of a first grader constantly exposed to problems of all sorts of grade levels without any indicator of grade or path to that grade. Naturally these first graders get it into their heads that there is something very wrong with the web.
If you want to "improve the web" understand pedagogy.