Project Xanadu (wikipedia.org)
132 points by alokrai on Feb 21, 2021 | 82 comments



It was clear that the WWW was a good idea but it clearly wouldn’t succeed because of the foolish decision not to have back links. But it might gain a bit of traction and make people receptive to the real thing. I attended more than one significant academic conference in the early 90s where this belief was uttered to general agreement. People had even already experienced broken links and yet couldn’t put 2 and 2 together.

I also wanted bidirectional links even though I was a Lisp programmer! In case it’s not clear: one-way links were an inspired decision.

Another belief, in the latter part of the 90s, was that decent web search was basically impossible as someone would have to store a copy of the whole thing, which is clearly impossible.


Another one is that Linux would be replaced by GNU Hurd when it's ready.

https://en.wikipedia.org/wiki/History_of_Linux#%22Linux_is_o...


Well, we still have time for this. Or has Hurd been finished in the meantime?


how ready is it?



How would you practically implement robust backlinks in a distributed hypertext system to begin with?


Technically they exist in HTML+HTTP via the (mispelled) `Referer` header. It sucks that cross-site Referer sharing is seemingly being phased out by browsers in the name of user-privacy (as referer URIs do often contain PII), but there are more cynical explanations afoot which I won’t go into.
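
To make this concrete, here is a minimal sketch (not anyone's production design): a stdlib-Python server that records backlinks from nothing but the incoming Referer header. It assumes clients still send cross-site Referer, which is exactly the assumption browsers are eroding.

    # Sketch: harvest backlinks from the (mispelled) Referer header.
    from collections import defaultdict
    from http.server import BaseHTTPRequestHandler, HTTPServer

    backlinks = defaultdict(set)  # local path -> set of referring URIs

    class BacklinkHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            referer = self.headers.get("Referer")
            if referer:
                backlinks[self.path].add(referer)
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            # Reply with every backlink we've seen for this path so far.
            self.wfile.write("\n".join(sorted(backlinks[self.path])).encode())

    if __name__ == "__main__":
        HTTPServer(("", 8000), BacklinkHandler).serve_forever()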

Another approach is having a central link-repository that works as an overlay on the fully distributed web - this could be provided by search-engines.

Another approach is a distributed link database. I’m sure some clever DHT system could be used to allow practically anyone to participate without needing nodes to hold a copy of the full database (a la Bitcoin), or even one that allows directly-relevant nodes (e.g. a website’s own node) to store only their own bi-di links and not waste resources on unaffiliated content.
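
To sketch just the data model of that last idea (with the actual DHT routing waved away): key each backlink record by a hash of the target URI, so whichever node owns that region of the keyspace can answer “who links here?” without holding the whole database.

    import hashlib

    def link_key(target_uri: str) -> str:
        # In a real DHT, nodes whose IDs are nearest this hash would
        # store the record; the routing layer is omitted in this sketch.
        return hashlib.sha256(target_uri.encode("utf-8")).hexdigest()

    store = {}  # stand-in for the distributed store: key -> set of sources

    def register_link(source_uri, target_uri):
        store.setdefault(link_key(target_uri), set()).add(source_uri)

    def backlinks_of(target_uri):
        return store.get(link_key(target_uri), set())

    register_link("https://example.com/post", "https://example.org/page")
    print(backlinks_of("https://example.org/page"))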

Of course, all of this assumes that spam, trolling, griefing, etc don’t exist :)

For all the talk WWW gets about being a distributed system... it isn’t: it’s more of a federation of systems united only by their common use of HTML+HTTP and (hopefully) RESTful resource architecture. A true distributed WWW would be some horrible mix of IPFS, BitTorrent and Ethereum. I think that’s how Roko's Basilisk will come into existence...


Okay, "decentralized hypertext", not distributed. Same problem still exists: servers that run an HTTP service would need to accept and track backlink pings. This was done a decade and change ago with Movable Type's trackbacks feature and that was, as you guessed, a massive spam vector.

You can, of course, construct a backlink system by indexing pages. That's basically how Google works and is probably the only use case I can think of where backlinks help more than they hurt. In fact, your distributed link database is basically half of what you'd need to build a search engine. The other half being, of course, an oppressive antispam system. The only reason Google works at all is because they frequently adjust their ranking algorithms or shove them into piles of linear algebra nobody can comprehend. No clue how you do that in a distributed manner, though.
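
(A toy version of that backlink-by-indexing idea, assuming you already have crawled HTML in hand; everything hard about crawling, canonicalization, and ranking is left out:)

    from collections import defaultdict
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hrefs = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.hrefs += [v for k, v in attrs if k == "href" and v]

    def build_backlink_index(pages):
        # pages: URL -> raw HTML. Returns target URL -> set of linking URLs.
        index = defaultdict(set)
        for url, html in pages.items():
            extractor = LinkExtractor()
            extractor.feed(html)
            for href in extractor.hrefs:
                index[urljoin(url, href)].add(url)
        return index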


> That's basically how Google works and is probably the only use case I can think of where backlinks help

Which also opened the door to a massive spam problem, which is obvious in retrospect. The only reason Google sort-of works is that it is not a protocol but a centralized service, outspending bad actors with a massive number of warm bodies involved.

It is a common theme of well-thought-out hypertext systems - both Xanadu and the Semantic Web have had a massive amount of research poured into them - that they aren't designed to operate in an adversarial environment.

Trustless systems such as the bare-bones hypertext that is the original web didn't have this problem, in part because it wasn't ambitious enough. This can be seen as validating worse-is-better, or as showing how vast permissionless systems can scale, or simply as a stroke of luck. From whichever perspective, it was a massive success.


> No clue how you do that in a distributed manner, though.

Distributed blockchains (Bitcoin, Ethereum, etc) solved this with Proof-of-Work, which is derived from the concept of an "e-mail postage stamp" to fight spam by forcing SMTP senders to compute an expensive hash. Similarly, if each backlink "cost" a PoW unit to earn Googlejuice it would probably be a useful metric again.

...or not: because that means indirectly placing a dollar amount on a backlink, and well-funded marketing companies are more than willing to shell out for that. Just an idea though...
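
(For reference, the hashcash-style stamp alluded to above fits in a few lines; the difficulty here is an arbitrary 20 leading zero bits:)

    import hashlib
    import itertools

    BITS = 20  # arbitrary difficulty: ~2^20 hashes to mint, one to verify

    def leading_zero_bits_ok(stamp: str) -> bool:
        digest = hashlib.sha256(stamp.encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - BITS) == 0

    def mint_stamp(resource: str) -> str:
        # Expensive: grind nonces until the hash has BITS leading zeros.
        for nonce in itertools.count():
            stamp = f"{resource}:{nonce}"
            if leading_zero_bits_ok(stamp):
                return stamp

    def verify_stamp(stamp: str, resource: str) -> bool:
        # Cheap: one hash plus a prefix check.
        return stamp.startswith(resource + ":") and leading_zero_bits_ok(stamp)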

Anyone want to start a "Xanadu 2: Electric Boogaloo" with a PoW algorithm?


I have no faith in "blockchain" as a component of a distributed system. Specifically: any system where mutations are guarded behind some thing of value as a consensus mechanism will trend towards centralization. (Hash chains are perfectly fine otherwise.)

PoW is not a distributed system in the slightest. The moment someone creates an accelerator for it, it's done. The set of people who can meaningfully interact with the system as a distributed system shrinks to the very small set of people willing to spend loads of money on ASICs; and the far larger set of people with CPUs is entirely crowded out. "One CPU, one vote" becomes "one ASIC, one million votes".

PoS is at least less of an environmental catastrophe, but it now explicitly involves capital. And here's the problem: capital is inherently centralizing. The idea that we can decentralize or distribute capitalism is absurd, because that's not how economies of scale work. Hell, the original distributed system, free markets, have been routinely subverted time and time again to the interests of those who already have capital in those markets.

I know of no proof-of-X system other than work and stake; and both are hilariously inadequate for their intended use cases.


> This was done a decade and change ago with Movable Type's trackbacks feature and that was, as you guessed, a massive spam vector.

This is not an insurmountable problem. You can choose to accept trackbacks only from sites that can point to endorsement by some mutually trusted 3rd party. This is ultimately how the Fediverse works - bad actors will get booted from the most popular networks and find themselves unable to 'push' content to the network. And modern federated-web standards include the "trackback" case, e.g. via WebMention.
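
(WebMention really is this small; a sketch of the sending half, with the endpoint-discovery step - a Link header or <link rel="webmention"> on the target - omitted:)

    import urllib.parse
    import urllib.request

    def send_webmention(endpoint: str, source: str, target: str) -> int:
        # Per the W3C WebMention spec: a form-encoded POST naming the page
        # that links (source) and the page being linked to (target).
        data = urllib.parse.urlencode({"source": source, "target": target})
        req = urllib.request.Request(
            endpoint,
            data=data.encode("utf-8"),
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status  # 201/202 on success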


> You can choose to accept trackbacks only from sites that can point to endorsement by some mutually trusted 3rd party. This is ultimately how the Fediverse works - bad actors will get booted from the most popular networks and find themselves unable to 'push' content to the network.

I suppose this would have worked in the 2005-ish era when MovableType, Pingbacks/Linkbacks, and Web 2.0 were all the rage, but in today's internet culture, with its very partisan politics (...of a distinctly American flavour...) permeating everywhere, the concept of "endorsement", even in a technical sense, can be framed as a political gesture, even if the basis of an endorsement can be defended on purely technical grounds. We saw this happen with the "Twitter Verified" badge controversy (the identity-verification of an account is a purely technical and factual thing, but because Twitter chose to use the imagery of endorsement to indicate verified accounts it led to... well, that's another discussion). My point being that even if a pingback/linkback/trackback system can accurately filter out (commercial) spam, maintainers of those websites would likely still want to curate pingbacks anyway - which means still having to actively moderate/audit that, and who wants to spend their time doing that?


You misspelled misspell. (Sorry.)


Ha, trying to Americanify "misspelt".


> but there are more cynical explanations afoot which I won’t go into.

Can you expand on this?


From trying to read about this years ago, I always remember the "Ent" and the "Enfilade" being Ted Nelson's abstractions for supporting bi-directional linking

https://en.wikipedia.org/wiki/Enfilade_(Xanadu)


Give each document a unique identifier linked to a namespace (the domain or website-url) and let the crawlers do their work.

A few more optimizations here and there might be beneficial, like special support from browsers, or a versioned link description of your namespace to allow local construction of the real resource endpoint, and such things.


Do we have robust forward links or is this a burden only for back links?


No, we get 404s from broken forward links, either because the target went away, or because of a typo in the link.

I can link to any page I like without having to talk to or get permission from anyone at all. While the person who manages the target of that link can decide on their own how to respond when the link is dereferenced.

Allowing this to happen, and making 404 perfectly acceptable was, in retrospect, an excellent idea.


Are there any screen-sharing VNC-like things that allow jumping via links from one server to another (the way that HTML links allow jumping from one document to another)?

I'm sure it'd be a security/usability nightmare, but wonder if it might be an interesting nightmare.


Given that dereferencing a URI can result in program execution rather than simple transmission of a document (or, if you prefer, you can ssh to a host and then ssh thither) I’d say yes.

A host is a pretty lightweight concept these days (so lightweight and often ephemeral that we don’t use the term much and rarely even give them names). CL/DM was written in the mainframe era.


Yeah, as for program execution, I was specifically thinking of VNC servers that are the equivalent of CGI in that they produce "displays" that never occurred in any framebuffer. Hyperlinks would allow networks of this kind of virtual VNC...


There is an amazing indie video game called "Kentucky Route Zero", an adventure-esque game where you explore a mysterious magical road in Kentucky and meet all sorts of fantastical people and creatures. It is bizarre and the genre is probably best described as magical surrealism.

Anyway, spoilers: when going through one of the caves, you meet these bizarre computer science researchers who have an old mainframe named Xanadu. It's all a reference to the actual project that shot for the moon and just kinda got left behind. Interesting historical tidbits shoved into the game.


Ah, Xanadu! I always think of the 1995 Wired piece "The Curse of Xanadu" [1], both because it's good and because it's from an age when the idea of universal connection felt much further away.

[1] https://www.wired.com/1995/06/xanadu/


So, last time I mentioned that article on HN (and with the same enthusiasm you display) a nice fellow complemented my post (https://news.ycombinator.com/item?id=15270358) with this:

> If that's the case could you please post Nelson's responses http://web.archive.org/web/20001101230424/http://www2.educ.k... http://web.archive.org/web/20001003011753/http://xanadu.com.... along with it.

:)


Also, if anyone really wants to find out Nelson's thoughts on that article, he recently (Aug 2020) released a feature-length video addressing it and its lasting impact.

https://www.youtube.com/watch?v=ASgjSxNdDqI


Oh thanks! I didn't know that he'd responded.


Is it just me or does this article feel like it’s written in bad faith?

The adjectives used by the author seem unnecessarily insulting considering Ted is voluntarily taking part in the interview.

It just seems like a strange vibe from the author, reading that.


Ted Nelson’s response to wired is an interesting read: http://web.archive.org/web/20001101230424/http://www2.educ.k...


Nelson's "Computers for Cynics" is also an interesting take https://www.youtube.com/playlist?list=PLTI2Kz0V2OFlgbkROVmzk...


Yeah, don't use that. It's a sensationalist hit piece.

The reporter really misrepresented himself. It was kind of a James O'Keefe-style article.


Ted Nelson, in his own words, in a video by Notion: https://www.youtube.com/watch?v=JN1IBkAcJ1E


Wow that was a really good interview. I especially liked his poignant quote at the end.

    "It's been very lonely. but I've been absolutely sure of what I've been doing all the time. One of the main definitions of paranoia is believing what nobody else believes; so, one cure is for the patient to change his mind and believe what the rest of the world believes: that is the low road. the other, by which I hope to cure myself, is to persuade everybody else. And, then I will no longer be paranoid [...]"


Knowing nothing about this (I'd never heard of Xanadu before), it honestly sounds really sad.

Maybe this guy really is passionate and he's obviously driven, but based on my cursory skimming he seems a combination of depressed, stubborn, and delusional.

> In 2016, Nelson was interviewed by Werner Herzog in his documentary, Lo and Behold, Reveries of the Connected World. "By some, he was labeled insane for clinging on," Herzog said. "To us, you appear to be the only one who is clinically sane."[12] Nelson was delighted by the praise. "No one has ever said that before!" said Nelson. "Usually it's the other way around."

I am not commenting on whether or not he is working on something important. It just seems after so many decades of...very little, you may be so deep into the sunk cost fallacy that it's almost a miniature mental illness.

I wish him all the best, but this sounds like one of those really sad stories that you'll read 50 years after he's dead and just feel bad for the guy.


Apt username.

There is nothing sad about religious beliefs. We all have them but most people are not willing to own up to that. They'd rather just go with the flow and adopt the herd mentality.

If you have the conviction and strength of will to go your own way then that is worthy of admiration imo, and I think that holds regardless of the morality of the path chosen. However, if the motivation is based in charity then doubly so.

From my point of view you are the 'sad' one (and me for caring enough to write this response). Judging people for dedicating their life to something they believe would lead to prosperity for others... it's so ugly. Show some respect.


> Show some respect.

The dude can do whatever he wants with his life. That doesn't mean I have to respect him for it. And is sympathy not a form of respect?

If he's happy then he's happy. It doesn't matter what I think anyway.

> There is nothing sad about religious beliefs.

I don't agree with this. Religious beliefs can lead to persecution, self-flagellation, terrorism, suicide for gay/trans people, abusing women...

> If you have the conviction and strength of will to go your own way then that is worthy of admiration imo and I think that holds irregardless of the morality of the path chosen. However if the motivation is based in charity then doubly so.

You're welcome to admire the guy. And I'm welcome to pity him. I think he's delusional and I do not consider that admirable at all.


I find it amusingly ironic how religious you are about your anti-religion sentiment.


I just don't think religions are inherently good. I also don't think they're inherently bad.


That statement seems to contradict your disagreement with the statement that there's nothing sad about religious beliefs.

I think you are in denial about holding a biased opinion.


Some religious beliefs are sad, not all of them.

Please be more careful to read what I write, and not what you want to see.


Transclusion is a great concept, which I first saw and used in Jacobson's Objectory software engineering tool. It's also available in the CrossLine information manager (see https://github.com/rochus-keller/CrossLine).


Transclusion is an appealing, beautiful-seeming idea, but it's an extremely difficult thing to implement. What's more, it's not at all evident in the end whether it's desirable. Transclusion involves effectively conditionally including text or other fragments within documents.

It's easy for schemes of this sort to become fragile constructs. Moreover, the primary motivation for such an approach is copyright management - i.e., fighting the "information wants to be free" tendency.
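
(The mechanism itself is easy to sketch; the fragility shows up the moment any source document changes underneath the spans. Names and offsets below are made up:)

    # Toy transclusion: a composed document is a list of spans, each either
    # literal text or a (source_id, start, end) reference into another doc.
    sources = {
        "declaration": "We hold these truths to be self-evident...",
    }

    composed = [
        ("text", 'As the original says, "'),
        ("ref", ("declaration", 0, 20)),  # breaks silently if the source shifts
        ("text", '" and so on.'),
    ]

    def render(doc):
        out = []
        for kind, payload in doc:
            if kind == "text":
                out.append(payload)
            else:
                source_id, start, end = payload
                out.append(sources[source_id][start:end])  # resolved at view time
        return "".join(out)

    print(render(composed))  # As the original says, "We hold these truths" and so on.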


> it's not at all evident in the end if it's desirable

In the Objectory tool you could e.g. use it to re-use definitions throughout a specification. You could add a transclusion link and show the text wherever needed without creating redundancy.


Transclusions are great indeed! With proper filter mechanisms it's amazing what you can do. Have a look at TiddlyWiki (https://tiddlywiki.com/), which is open source and essentially built on transclusions. It's a bit hard to explain, but when one gets one's head around the concept, it's extremely powerful and flexible!


Roam Research also has them.


Yep, and RR gives a nice preview of what the web would look like with two-way links.


Just too bad it's also not an open standard and, for now, cloud-only.

I tried it for a little product review I did of a friend's app. Once I was done I found out there is no good way to export any of it to anything readable...

My choices were broken markdown or CSS hacks for clean screenshots.


Today the available formats are Markdown, JSON and EDN.


JSON and EDN are not easily readable for humans.

Markdown would be OK if it translated well; it's made to be rendered as HTML after all.

A new line in Markdown, however, is not the same as a new line in Roam, but the export treats it as if it were. If you render the exported Markdown as HTML you will have a lot of broken paragraphs (and maybe lists too, I don't quite remember).

edit: look: roam | markdown | html https://i.imgur.com/2WGJGsY.png


In 1988, Autodesk (makers of AutoCAD) was so impressed by the Xanadu project that they gave them financial backing. After four years, Autodesk gave up:

> […] Come 1992, the “resources of Autodesk” were still funding “talent of the Xanadu team” which had not, as of that date, produced anything remotely like a production prototype—in fact, nothing as impressive as the 88.1x prototype which existed before Autodesk invested in Xanadu. On August 21, 1992 Autodesk decided to pull the plug and give its interest in Xanadu back to the Xanadudes.

(Quoted from footnote linked from this page: https://www.fourmilab.ch/autofile/e5/chapter2_64.html)


Yeah, but I worked on a *nix workstation configured by Autodesk at that time, and the engineers that set it up casually mentioned that it had taken a month. What that snippet says to me, plus the context at the time, is that it was deeply arcane to work on "servers" and that delays of weeks were common when "configuring" the thing. In fact, even booting hardware was a specialty, and a diversity of hardware existed that might surprise someone born in the 90s or later.

I was sympathetic to the "Xanadudes" but also personally saw the unrealizable side of their particular efforts. "Tilting at windmills" seemed a fitting description.



There's no GitHub because Ted doesn't believe in open source.

He's not on board with open systems at all

We don't talk anymore because of that disagreement. I tried to convince him to blow the thing open and tried to explain how that's how projects get accelerated. Nope, not happening.


I mean, open systems would be fundamentally in tension with the whole idea of paid, proprietary transclusion that's one of the central pillars of Xanadu, right?


I'd say that to make transclusions with microtransactions work, it has to be an open standard. It needs browser support and people willing to use it to encode their work.


Ah sorry I don't mean the mechanism itself, I was referring to the content.

Although even in the case of the standard, leaving aside whether this would make Xanadu feasible or not, I'm not sure, given Xanadu's history, whether it was ever conceived of as being "open" in the same way that, say, HTML is.

There are plenty of standards that are effectively closed and require payment to access and read (see e.g. ISO standards, as well as, in the U.S., the state of Georgia very nearly getting away with putting legally-binding annotations behind a paywall: https://arstechnica.com/tech-policy/2020/04/supreme-court-ru...). I think Xanadu's philosophy would've lent it to a similar approach, independent of what this would mean for its survival.


That you've got to pay for a copy of many (but not all [1]) ISO standards doesn't make them closed. Lack of funding for standardization work, and the expectation that everything must be had at no cost, is what gets you monopolies and dominating players, such as with so-called web standards. Nothing new really; quoting from a post by Paul Prescod from 1997 in the context of subsetting XML from SGML [2]:

> Are you happy with the process for developing and improving HTML? Do you feel that the results are of high quality? Do you think that you've had sufficient input? [...]

> In order to influence ISO standards you need only be recognized as an expert in your country. Unless your country is an oligarchy or dictatorship, this will cost you very little or nothing at all [...]

[1]: https://standards.iso.org/ittf/PubliclyAvailableStandards/in... (beware comically broken page on mobile)

[2]: http://lists.xml.org/archives/xml-dev/199710/msg00189.html


The thing is, ISO isn't charging a fee with the idea that they can make money selling the standards. Hell, most standards they publish don't really have a monetization plan beyond "make all the relevant member organizations happy". The one exception is video, and that's not even money that ISO MPEG (or ITU VCEG) even sees. It all bypasses the standards orgs entirely and goes straight to patent owners through a clusterfsck of patent pools.


Here's another comment I wrote in the HN discussion from a couple years ago about "Ted Nelson on What Modern Programmers Can Learn from the Past [video] (ieee.org)", in which James Clark talked about his role in the transition from SGML to XML, and the value of standards being sufficiently simple to have multiple interoperable implementations:

IEEE Article and video about "Ted Nelson on What Modern Programmers Can Learn from the Past":

https://spectrum.ieee.org/video/geek-life/profiles/ted-nelso...

HN discussion:

https://news.ycombinator.com/item?id=16222520

My comment and quotes from the DDJ interview of James Clark, "The Triumph of Simplicity":

https://news.ycombinator.com/item?id=16227249

In an ideal world we would all be using s-expressions and Lisp, but for now XML and JSON fill the need for language-independent data formats.

>Not trying to defend XSLT (which I find to be a mixed bag), but you're aware that its precursor was DSSSL (Scheme), with pretty much a one-to-one correspondence of language constructs and symbol names, aren't you?

The mighty programmer James Clark wrote the de-facto reference SGML parser and DSSSL implementation, was technical lead of the XML working group, and also helped design and implement XSLT and XPath (not to mention expat, Trex / RELAX NG, etc.)! It was totally flexible and incredibly powerful, but massively complicated, and you had to know Scheme, which blew a lot of people's minds. But the major factor that killed SGML and DSSSL was the emergence of HTML, XML and XSLT, which were orders of magnitude simpler.

James Clark:

http://www.jclark.com/

https://en.wikipedia.org/wiki/James_Clark_(programmer)

There's a wonderful DDJ interview with James Clark called "A Triumph of Simplicity: James Clark on Markup Languages and XML" where he explains how a standard has failed if everyone just uses the reference implementation, because the point of a standard is to be crisp and simple enough that many different implementations can interoperate perfectly.

A Triumph of Simplicity: James Clark on Markup Languages and XML:

http://www.drdobbs.com/a-triumph-of-simplicity-james-clark-o...

I think it's safe to say that SGML and DSSSL fell short of that sought-after simplicity, and XML and XSLT were the answer to that.

"The standard has to be sufficiently simple that it makes sense to have multiple implementations." -James Clark

My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!

Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine-compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you began to wonder what you ever needed XSLT for in the first place, and why you didn't just write a nice flexible XML transformation library in JavaScript and forget about XSLT.

XSLT Stylesheet Scripting Using <msxsl:script>:

https://docs.microsoft.com/en-us/dotnet/standard/data/xml/xs...

Excerpts from the DDJ interview (it's fascinating -- read the whole thing!):

>DDJ: You're well known for writing very good reference implementations for SGML and XML Standards. How important is it for these reference implementations to be good implementations as opposed to just something that works?

>JC: Having a reference implementation that's too good can actually be a negative in some ways.

>DDJ: Why is that?

>JC: Well, because it discourages other people from implementing it. If you've got a standard, and you have only one real implementation, then you might as well not have bothered having a standard. You could have just defined the language by its implementation. The point of standards is that you can have multiple implementations, and they can all interoperate.

>You want to make the standard sufficiently easy to implement so that it's not so much work to do an implementation that people are discouraged by the presence of a good reference implementation from doing their own implementation.

>DDJ: Is that necessarily a bad thing? If you have a single implementation that's good enough so that other people don't feel like they have to write another implementation, don't you achieve what you want with a standard in that all implementations — in this case, there's only one of them — work the same?

>JC: For any standard that's really useful, there are different kinds of usage scenarios and different classes of users, and you can't have one implementation that fits all. Take SGML, for example. Sometimes you want a really heavy-weight implementation that does validation and provides lots of information about a document. Sometimes you'd like a much lighter weight implementation that just runs as fast as possible, doesn't validate, and doesn't provide much information about a document apart from elements and attributes and data. But because it's so much work to write an SGML parser, you end up having one SGML parser that supports everything needed for a huge variety of applications, which makes it a lot more complicated. It would be much nicer if you had one SGML parser that is perfect for this application, and another SGML parser that is perfect for this other application. To make that possible, the standard has to be sufficiently simple that it makes sense to have multiple implementations.

>DDJ: Is there any markup software out there that you like to use and that you haven't written yourself?

>JC: The software I probably use most often that I haven't written myself is Microsoft's XML parser and XSLT implementation. Their current version does a pretty credible job of doing both XML and XSLT. It's remarkable, really. If you said, back when I was doing SGML and DSSSL, that one day, you'd find as a standard part of Windows this DLL that did pretty much the same thing as SGML and DSSSL, I'd think you were dreaming. That's one thing I feel very happy about, that this formerly niche thing is now available to everybody.


Yeah sure.

This is a still pretty nascent distinction I'm thinking of, but I think there are at least two types of innovators.

Those who can delegate, cede the process of new ideas and fundamental control, and know others will act in good faith and shepherd things to the finish line. Examples are TBL and Torvalds - who may be a tough cookie, but isn't involved in everything; IBM, Nvidia and Google do their thing.

Then there is the other kind. There aren't many kind words for these folks. They'll never delegate creativity or control, only tasks.

Xanadu ideally, and at a previous time in actual documents, was envisioned as a proprietary publishing empire, you know as if you'd get a mythical Alan Kay Xanadu Dynabook that'd have all the dreams manifest.

Sometimes these people are also really good and can make it happen, like Steve Jobs. But most of the time there's something critical they aren't good at and aren't willing to cede. Other times people aren't willing to do the work; it's too much of a barrier.

These types of people can succeed marvelously with smaller ideas. The perfectionist restaurant where the all-star chef controls everything is basically every Michelin-star restaurant explained.

Same goes for great movies or music. Michael Jackson and Hitchcock were supposedly like this.

It's just a harder model to make work for big ideas. So much harder it usually just can't be done.

I really want a Nelson system, but unless he can move himself to the TBL column and give way on crucial tenets, I really don't see it happening and gaining large adoption.

These personality types seem distrustful of people like me, as if I'm going to steal their valor. I have invariably failed to explain that no, I want them to succeed, give me 0 dollars and 0 credit.

I usually recommend that they cede not even to me, but to some other brilliant person that I have no connection with. However, that's not compatible with their model of human motivation, so their distrust becomes further entrenched, usually with me AND the recommendation. It's ugly.

I don't know how to work with these brilliant people.

For instance, Ted's micropayment system is essentially what Eich achieved using his BAT cryptocurrency in the Brave browser. He ceded massive control to make it happen. Now places like Wikipedia and the Internet Archive are accepting BATs.

A true Nelsonite would debate with me how Eich's BATs are no true Scotsman, and that's the problem. They're close enough, damn it; let it go. You aren't always going to get a bull's-eye on the dartboard of life. Make it a stepping stone, a pipeline to the true thing, whatever; just don't let the perfect be the enemy of the good. I'm sure the first iPhone and the first iPod weren't all of Steve's ideas made manifest. Heck, if he hadn't had a perfectly favorable storm with Microsoft floating Apple during their late-90s antitrust troubles, none of Apple's second spring would've happened either.

Nelson's ideas work - that's what TBL proved with the W3 - he just needs to structure them for success.


What you are describing is basically auteur theory. Though, I would argue these types of people are not worried about someone stealing their valor. It's mostly the worry that upon ceding, the original vision will be changed/corrupted in some way. And in my opinion, that fear is definitely justified, but it often comes to the detriment of progress.


I wrote about this on HN a few years ago, and on Dave Winer's UserLand Frontier discussion group a couple of decades ago, after Xanadu released some open source code, which was actually the output of a Smalltalk-to-C++ transpiler. (That code was actually from a team at Autodesk, not directed by Ted Nelson -- see his reply below.)

https://news.ycombinator.com/item?id=16224154

I think his biggest problem is that he refuses to collaborate with other people, or build on top of current technology. He's had a lot of great important inspirational ideas, but his implementation of those ideas didn't go anywhere, he's angry and bitter, and he hasn't bothered re-implementing them with any of the "inferior technologies" that he rejects.

Back in 1999, project Xanadu released their source code as open source. It was a classic example of "open sourcing" something that was never going to ship otherwise, and that nobody could actually use or improve, just to get some attention ("open source" was a huge fad at the time).

http://www.theregister.co.uk/1999/08/27/web_precursor_xanadu...

>Register believe it or not factoid: Nelson's book Computer Lib was at one point published by Microsoft Press. Oh yes. ®

They originally wrote Xanadu in Smalltalk, then implemented a Smalltalk to C++ compiler, and finally they released the machine generated output of that compiler, which was unreadable and practically useless. It completely missed the point and purpose of "open source software".

I looked at the code when it was released in 1999 and wrote up some initial reactions that Dave Winer asked me to post to his UserLand Frontier discussion group:

http://static.userland.com/userlanddiscussarchive/msg010163....

http://static.userland.com/userlanddiscussarchive/msg010164....

http://static.userland.com/userlanddiscussarchive/msg010165....

http://static.userland.com/userlanddiscussarchive/msg010166....

http://static.userland.com/userlanddiscussarchive/msg010167....

A few excerpts (remember I wrote this in 1999 so some of the examples are dated):

>Sheez. You don't actually believe anybody will be able to do anything useful with all that source code, do you? Take a look at the code. It's mostly uncommented glue gluing glue to glue. Nothing reusable there.

>Have you gotten it running? The documentation included was not very helpful. Is there a web page that tells me how to run Xanadu? Did you have to install Python, and run it in a tty window?

>What would be much more useful would be some well-written design documents and post-mortems, comparisons with current technologies like DHTML, XML, XLink, XPath, HyTime, XSL, etc., and proposals for extending current technologies and using them to capture the good ideas of Xanadu.

>Has Xanadu been used to document its own source code? How does it compare to, say, the browseable cross-referenced mozilla source code? Or Knuth's classic Literate Programming work with TeX?

>Last time I saw Ted Nelson talk (a few years ago at Ted Selker's NPUC workshop at IBM Almaden), he was quite bitter, but he didn't have anything positive to contribute. He talked about how he invented everything before anyone else, but everyone thought he was crazy, and how the world wide web totally sucks, but it's not his fault, if only they would have listened to him. And he verbally attacked a nice guy from Netscape (Martin Haeberli -- Paul's brother) for lame reasons, when there were plenty of other perfectly valid things to rag the poor guy about.

>Don't get me wrong -- I've got my own old worn-out copy of the double sided Dream Machines / Computer Lib, as well as Literary Machines, which I enjoyed and found very inspiring. I first met the Xanadu guys some time ago in the 80's, when they were showing off Xanadu at the MIT AI lab.

>I was a "random turist" high school kid visiting the AI lab on a pilgrimage. That was when I first met Hugh Daniel: this energetic excited big hairy hippie guy in a Xanadu baseball cap with wings, who I worked with later, hacking NeWS. Hugh and I worked together for two different companies porting NeWS to the Mac.

>I "got" the hypertext demo they were showing (presumably the same code they've finally released -- that they were running on an Ann Arbor Ambassador, of course). I thought Xanadu was neat and important, but an obvious idea that had been around in many forms, that a lot of people were working on. It reminded me of the "info" documentation browser in emacs (but it wasn't programmable).

>The fact that Xanadu didn't have a built-in extension language was a disappointment, since extensibility was an essential ingredient to the success of Emacs, HyperCard, Director, and the World Wide Web.

>I would be much more interested in reading about why Xanadu failed, and how it was found to be inadequate, than how great it would have been if only it had taken over the world.

>Anyway, my take on all this hyper-crap is that it's useless without a good scripting language. I think that's why Emacs was so successful, why HyperCard was so important, what made NeWS so interesting, why HyperLook was so powerful, why Director has been so successful, how it's possible for you to read this discussion board served by Frontier, and what made the World Wide Web what it is today: they all had extension languages built into them.

>So what's Xanadu's scripting language story? Later on, in the second version, they obviously recognized the need for an interactive programming language like Smalltalk, for development.

>But a real-world system like the World Wide Web is CONSTANTLY in development (witness all the stupid "under construction" icons), so the Xanadu back and front end developers aren't the only people who need the flexibility that only an extension language can provide. As JavaScript and the World Wide Web have proven, authors (the many people writing web pages) need extension languages at least as much as developers (the few people writing browsers and servers).

>Ideally, an extension language should be designed into the system from day one. JavaScript kind of fits the bill, but was really just nailed onto the side of HTML as an afterthought, and is pretty kludgey compared to how it could have been.

>That's Xanadu's problem too -- it tries to explain the entire universe from creation to collapse in terms of one grand unified theory, when all we need now are some practical techniques for rubbing sticks together to make fire, building shelters over our heads to keep the rain out, and convincing people to be nice and stop killing each other. The grandiose theories of Xanadu were certainly ahead of their time.

>It's the same old story of gross practicality winning out over pure idealism.

>Anyway, my point, as it relates to Xanadu, and is illustrated by COM (which has its own, more down-to-earth set of ideals), is that it's the interfaces, and the ideas and protocols behind them, that are important. Not the implementation. Code is (and should be) throw-away.

>There's nothing wrong with publishing old code for educational purposes, to learn from its successes and mistakes, but don't waste your time trying to make it into something it's not.

Ted replied to the HN thread:

>The 1999 "source code" referred to above is in two parts: xu88, the design my group worked out in 1979, now called "Xanadu Green", described in my book "Literary Machines"; and a later design I repudiate, called "Udanax Gold", which the team at XOC (not under direction of Roger Gregory or myself) redesigned for four years until terminated by Autodesk. That's the one with the dual implementation in Smalltalk. They tried hard but not on my watch. Please distinguish between these two endeavors.


There's a tendency of ideologues to value ideological purity over fitness for use.

This is fine as long as people acknowledge that their pure thought-stuff is malleable.

One of Ted's memorable issues was in, say, indexing documents. He wanted the markup to exist in a separate document that was byte-addressable, not inline. So, a separate index file.

I asked him, "What about locales? Some systems are UTF-16." And you know, instead of, I dunno, admitting that not doing a packet, block, or inline format was a mistake, he just insisted it'll be handled later. I asked him about cadence - how will you do versioning between the two, things are grossly inefficient if you change one byte, etc.
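
(The encoding problem is easy to demonstrate: the "same" offset drifts depending on how the text is encoded, so a byte-addressed index is only valid against one specific encoding of one specific revision:)

    # The same substring sits at three different offsets depending on
    # whether you count characters, UTF-8 bytes, or UTF-16 bytes.
    text = "naïve café"
    needle = "café"

    char_off = text.index(needle)                                           # 6
    utf8_off = text.encode("utf-8").index(needle.encode("utf-8"))           # 7
    utf16_off = text.encode("utf-16-le").index(needle.encode("utf-16-le"))  # 12

    print(char_off, utf8_off, utf16_off)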

One of the other nice things about inline systems is that you can get a partial file and still have all the context. With that system, however, corruption is way too easy.

There are all these technical requirements to his grand idea that he asserts are completely, absolutely necessary, with no technical equivalents at all, and that supposedly don't have the obvious design and implementation limitations - limitations that could easily be overcome with very slight permutations while still completely satisfying all practical use cases.

And he loves saying he's not a programmer. It's like, come on Ted, trust the people who say your micromanaging of the implementation is erroneous. It's been like 50 years; some of your stuff doesn't work.

He shouldn't be bitter. Everyone technical in the field knows him. He's up there with Stallman, Knuth, and Shannon in name recognition. I probably won't even appear as a footnote no matter how hard I work. So yeah, mission fucking accomplished already Ted. Geez.


It's funny because he said it would be open at first. How did you know or come to work with him?


If curious, past threads:

Xanadu Basics – Visible Connection (2018) [video] - https://news.ycombinator.com/item?id=20258942 - June 2019 (25 comments)

“Xanadu Hypertext Documents” architecture and data structures, 2019 edition - https://news.ycombinator.com/item?id=19745517 - April 2019 (1 comment)

Project Xanadu - https://news.ycombinator.com/item?id=19710142 - April 2019 (32 comments)

Ted Nelson's Pre-Final Reply to “The Curse of Xanadu” by Gary Wolf / Gory Jackal - https://news.ycombinator.com/item?id=19672373 - April 2019 (1 comment)

Getting to Xanadu - https://news.ycombinator.com/item?id=18635123 - Dec 2018 (55 comments)

Xanadu - https://news.ycombinator.com/item?id=15269827 - Sept 2017 (86 comments)

Ted Nelson presents a working prototype version of Xanadu - https://news.ycombinator.com/item?id=12386339 - Aug 2016 (1 comment)

Roads to Xanadu - https://news.ycombinator.com/item?id=10642143 - Nov 2015 (6 comments)

The Xanadu Parallel Universe - https://news.ycombinator.com/item?id=10470068 - Oct 2015 (2 comments)

Xanadu: we have a working deliverable - https://news.ycombinator.com/item?id=7849389 - June 2014 (99 comments)

The Curse of Xanadu (1995) - https://news.ycombinator.com/item?id=4160525 - June 2012 (2 comments)

The Curse of Xanadu (1995) - https://news.ycombinator.com/item?id=1583311 - Aug 2010 (6 comments)

How Xanadu Works: technical overview - https://news.ycombinator.com/item?id=962315 - Nov 2009 (5 comments)

The Xanadu Dream - https://news.ycombinator.com/item?id=876469 - Oct 2009 (15 comments)

The Curse of Xanadu - https://news.ycombinator.com/item?id=795155 - Aug 2009 (18 comments)

Others?


Not only is the Xanadu system fascinating, but the odyssey of its on-and-off development should be required reading for any software professional.



I completely reject Project Xanadu, for one primary reason: it turns an open web into a per-page paywall DRM horror. And where money comes, user tracking and anti-privacy tech follow.

Right now, it takes time and effort to paywall stuff. It's not super hard, but it is not a seamless flip-a-switch paywall.

With Xanadu, that paywall is built in at the core - request a page, pay the cost, get the page and hope it's what you want. And I can only imagine the level of scrape-scams at that level - spam would be not only viable but would make money per click.

Sure, I could imagine useful tools to prevent hemorrhaging money, but the threat of clicking links and owing $1-10 for it is horrifying.

Hard pass. It does deserve study, both of its technology and of its closed-sourceness when it was devised. But it needs to stay in the scrap-bin of history.


I don't remember DRM being mentioned in all of the Ted Nelson stuff I've read - I think the idea he had was that just having microtransactions and transclusion would be enough to put copyright concerns aside. This is a sentiment that has aged like fine milk, of course.

In the Xanadu world, every new book published would be posted online for purchase, but also transcludable; you could remix the work and the transclusion system would ensure the publisher got paid for the part of the work you used by having the system just pass the payments along across each transcluded work. Effectively, you'd publish your remix in the form of a Git diff or IPS file (Ted used the metaphor of Edit Decision Lists) and your browser would just buy the pages of the book that were copied from and paste them into the remixed work.
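
(A toy of that EDL idea, to show how the royalty plumbing would work; the names and the per-character rate are invented:)

    # Toy Edit Decision List: a remix pulls spans out of priced source
    # works, and resolving it tallies what each publisher is owed.
    works = {
        "novel": {"text": "It was a dark and stormy night...", "rate_mc": 1},
    }

    edl = [
        ("original", "My remix begins. The book opens: "),
        ("span", ("novel", 0, 30)),  # pay-per-character transclusion
    ]

    def resolve(edl):
        out, owed = [], {}
        for kind, payload in edl:
            if kind == "original":
                out.append(payload)
            else:
                work_id, start, end = payload
                work = works[work_id]
                out.append(work["text"][start:end])
                owed[work_id] = owed.get(work_id, 0) + (end - start) * work["rate_mc"]
        return "".join(out), owed

    text, owed = resolve(edl)
    print(owed)  # {'novel': 30} millicents owed to the novel's publisher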

This is, of course, not how copyright works in the slightest. First off, most people with valuable works do not want to automatically authorize remixes or modifications of their work even if they do get paid. They want to have prior restraint and exclusive license agreements with specific users of their work for the sake of brand maintenance. There is no legal basis for transclusive remixing of content (see Micro Star v. FormGen). On the flip side, if someone were to write, say, a review of a work, the copyright owner is entitled to no control or remuneration whatsoever for any quotations used in that review. That's called "fair use" and it can only be adjudicated by a judge, not a computer program.

BTW, what I'm talking about already exists; it's called YouTube Content ID. YouTube runs literally every upload they have against a database of copyrighted samples to effectively generate a list of transclusion backlinks after the fact. All the problems I mentioned above are problems with YouTube Content ID, not Xanadu - there are plenty of publishers whose policy is "block", and it is their right to do that; making a fair use of content will trip the filters and you have to argue with YouTube to do what you are legally entitled to do; etc. It is entirely an engineer's view of copyright law, which is just laughably bad.


If micropayments, versioning, and bidirectional links are to be mandatory, there must be some kind of DRM-like system enforcing that. It's not said because it goes without saying. Of course Nelson pitched this as "for your own good", but so did the RIAA/MPAA.


I’m not seeing why. You purchase a copy of the datum. If someone links to you, they pay a micropayment and obtain their own copy, which they can then sell. It “should” be trivial to set your micropayment to zero if you want.


Because purchasing digital stuff is done within the realm of capitalism, which uses scarcity to guarantee value. If the scarcity is removed or reduced, the price drops toward 0. We saw cost reductions happen with music with Napster, and with movies with BitTorrent.

And the way to enforce scarcity with digital stuff means anti-user encryption and rights-reduction with DRM.

And without DRM, a single user "buys" the content, breaks the connection with transclusion, and uploads it for cheaper than legit, or free. Then you're making money on copyright violations - and do you really think the powers that be would allow this system with anonymous accounts/payments/withdrawals? (And there's your user tracking, user attestation, and anti-privacy tech... And just imagine if this account were cancelled due to piracy - ouch.)


> Because purchasing digital stuff is done within the realm of capitalism, which uses scarcity to guarantee value. If the scarcity is removed or reduced, the price drops toward 0. We saw cost reductions happen with music with Napster, and with movies with BitTorrent.

Yes, I believe this is why they are called “micropayments.” It’s expected that the price will approach zero.

> And the way to enforce scarcity with digital stuff means anti-user encryption and rights-reduction with DRM.

I’m sorry, this seems like a non sequitur. Why are they trying to maintain scarcity again?

> And without DRM, a single user "buys" the content, breaks the connection with transclusion,

Then how would people know the version of the content they have is the same as the content they want?

> and uploads it for cheaper than legit,

You misunderstand. There is no such thing; they own the content, and the price they charge for transcluding their copy is their decision. They’re allowed to charge a low price.

> Then you're making money on copyright violations

So it’s clear that either the law would have to change or people wouldn’t be using this for copyrighted works. This could replace copyright by giving content creators an alternate means of distribution.

> and do you really think the powers that be would allow

No, you’re right, might as well just give up and go back to Facebook and YouTube.

> and do you really think the powers that be would allow this system with anonymous accounts/payments/withdrawal? (And there's your user tracking, user attestation, and anti-privacy tech

So basically no problems with Xanadu that you can identify, but a lot of assumptions about society and politics that you believe would make it untenable. Got it.


The Web already has paywalls. Something like Xanadu would be an advantage, as at the moment everyone is rolling their own poor version of what Xanadu could deliver at higher quality.

And I also think that Xanadu would be impossible to scale up to a global level anyway. It would only be another walled garden living on the internet, similar to how there are already wikis and their interwiki connectivity.


> With Xanadu, that paywall is built in at the core

HTTP also has built-in paywalls via the 402 status code. I'm not sure that there's much practical difference.


You mean this one? The one that no browser actually does anything with?

https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/402

"The HTTP 402 Payment Required is a nonstandard client error status response code that is reserved for future use."

Of course, I guess "this is actually just a concept now but we promise it will be meaningful in the future" is something HTTP 402 has in common with Project Xanadu. (zing!)


The existence of 402 doesn’t really say much. 418 and 451 are probably just as popular as 402, but nobody would argue any of these codes are a material part of the HTTP core. Moreover, there’s no special functional purpose to 402, neither in the RFC nor in real life.


Isn't this something like Roam/Obsidian, just more complicated and academic, to the point of unimplementability?


An alternative (e.g. Nelson's) point of view would be that Roam/Obsidian is a dumbed-down and crippled version of his vision. But yes, it's hard to deny the point about unimplementability.


I did not see this article directly linked in the comments: https://www.wired.com/1995/06/xanadu/


did anyone try putting Xanadu on a blockchain

with an ICO and all



