A veritable hero of our times boasts a mere 688 followers on his Twitter account: https://twitter.com/RobertMMetcalfe (as of the dispatch of this message).
It's more of a personal account, about a year old, that opens with a tweet on cancel culture, followed by his political leanings, cryptocurrency posts, and an overwhelming amount of basketball stats. It does have the occasional post about his involvement in geothermal energy, but beyond that, following the account isn't going to get you any of the content he's known and respected for.
If he were tweeting anecdotes, or still working (it seems he might be?) and tweeting about that, the follower count would probably be a lot higher. But it's mostly US basketball (and other sports and personal stuff) by the looks of it, so however veritable a hero he is, the account just isn't interesting to the same audience, or at least not for the same reason (of course some of us will be basketball fans). And if you're big into 'basketball Twitter', he's probably ~'nobody'.
"Most ideas are mediocre down to bad" -- and mine certainly have been.
Beyond ideas, I would do some process things differently, especially the ones that could have gone better if I had understood people better.
I had the great -- and lucky -- benefit of falling into a well established research community -- ARPA-IPTO -- in the 60s with roots going back into the 50s and 40s. They "knew how to make progress" and I learned most of what little I understand about "process" from growing up in it.
Both of them remind me of Plan 9's "single most important feature"[1]:
> all mounted file servers export the same file-system-like interface, regardless of the implementation behind them. Some might correspond to local file systems, some to remote file systems accessed over a network, some to instances of system servers running in user space (like the window system or an alternate network stack), and some to kernel interfaces. To users and client programs, all these cases look alike.
And Rob Pike once said[2]:
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing. And uniform? Far from it, except in mediocrity. This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary. I sorely miss the unified system view of the world we had at Bell Labs, and the way things are going that seems unlikely to come back any time soon.
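To make "all these cases look alike" concrete, here's a minimal sketch in Go: the same open/read code path is used whether the name is backed by a local disk, a mounted remote file server, or a kernel device. The specific paths are just illustrative Plan 9-style names (assumptions on my part), not something a stock Unix box will actually have.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// readResource treats every name identically, regardless of what serves it.
func readResource(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return io.ReadAll(f)
}

func main() {
	// Illustrative Plan 9-style names; only the namespace decides what backs them.
	paths := []string{
		"/tmp/notes.txt",        // local file system
		"/n/fileserver/lib/doc", // remote file server mounted into the namespace
		"/dev/mouse",            // kernel device presented as a file
	}
	for _, p := range paths {
		data, err := readResource(p)
		if err != nil {
			fmt.Printf("%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(data))
	}
}
```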
Saying things were uniform when your entire interface is "bytes to and from a file descriptor" is just editing the past to me.
For most of these p9fs interactions, each file had a different format / API that you had to know in advance before reading or writing to it. That meant many things used custom ASCII, others were binary. Some pushed elements into different file locations, some piled them all into a blob under a single file.
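For a concrete taste of what that "custom ascii" looks like, here's a rough sketch following the conventions Plan 9's /net interface documents: read a connection number from a clone file, then write a textual "connect" command to the control end before the data file is usable. It's only meant to run on a system that actually exposes that namespace; the point is that the files are uniform while the commands behind each one are not.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func dialTCP(addr string) (*os.File, error) {
	// Opening the clone file allocates a new connection directory.
	clone, err := os.OpenFile("/net/tcp/clone", os.O_RDWR, 0)
	if err != nil {
		return nil, err
	}
	defer clone.Close()

	// The clone file hands back the connection number as ASCII text.
	buf := make([]byte, 32)
	n, err := clone.Read(buf)
	if err != nil {
		return nil, err
	}
	conn := strings.TrimSpace(string(buf[:n]))

	// The control end expects a line-oriented, protocol-specific command.
	if _, err := fmt.Fprintf(clone, "connect %s\n", addr); err != nil {
		return nil, err
	}

	// Only now does the data file behave like a plain byte stream.
	return os.OpenFile("/net/tcp/"+conn+"/data", os.O_RDWR, 0)
}

func main() {
	data, err := dialTCP("192.0.2.1!80") // Plan 9 uses host!port notation
	if err != nil {
		fmt.Println("dial failed (expected outside Plan 9):", err)
		return
	}
	defer data.Close()
	fmt.Fprintf(data, "GET / HTTP/1.0\r\n\r\n")
}
```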
It makes OAuth2 look decent.
Saying that you managed to push everything through a file interface is interesting, but it's no more or less connected than a world of http://
you're being deliberately obtuse. the uniformity in plan9 didn't mean everything spoke the same protocol, but that every resource could be shared everywhere. you wouldn't want your tv tuner card to talk ascii, but with 9p and a plan9 kernel you could access any tv tuner on the office network as if it was connected to your own machine. indeed, on any other network that would let you log in. that is the part that's sorely missed.
To me it sounds like codemac is being entirely sensible and pragmatic about what the limitations are. Pervasive "piping of bytes" is all well and good, but if you don't understand what the bytes mean, it doesn't really matter.
> you wouldn't want your tv tuner card to talk ascii, but with 9p and a plan9 kernel you could access any tv tuner on the office network as if it was connected to your own machine
Office network? Sure. Now try it over a WAN and suddenly it doesn't look so hot.
The thing is that once you're going over the network you really need to start thinking about the effects of latency and how to compensate for it or hide it. Network transparency is a foe in this (very common) scenario.
If I can get the damned TV onto the network at all, then I can use 9fs or any other protocol including http.
plan9 did not invent the network, or even the concept of shared resources. They focused on putting all resources into a filesystem API, entirely and exclusively. That's their innovation. I'm not being obtuse by pointing it out.
i'm not entirely sure about your timeline. plan9 came about in 1990. plenty of "sharing" back then was the unique property of distributed systems like amoeba.
On P9 at Bell labs, everything could be connected and uniform. In the real world with corporate firewalls, HTTPS is very nearly the only reliable option for connecting to other computers.
On top of that, most workstations in this world do not have public IP addresses, so p2p becomes a real non-starter.
This seemingly destructive segmentation of the Internet is precisely what will lead to its successor: a collection of "overlay networks" built on top of the existing IP stack. We're already seeing it in many ways, with Tor and VPNs as two commonly used examples. But overlay networks are not just advantageous for packet transport. They can represent any decentralized network. Bitcoin is an overlay network, in that it is literally a payment network built on top of existing IP infrastructure.
We are going to see more and more of these "overlays" as the Internet "re-decentralizes". Sure, what we know as "the Internet" is becoming increasingly centralized, and people are right to be concerned. But there is nothing to fear, so long as we can build more abstractions on top of the Internet as we know it. Let Amazon, Google, and the big "clouds" centralize as much of the Internet as they want. They are only digging their own grave, because the next revolution, the "Internet 2.0" will be as decentralized as its predecessor. Only instead of being built on the "dumb pipes" of copper and fiber, it will be built on the "dumb pipes" of the existing, increasingly centralized, Internet.
The more the Internet infrastructure consolidates itself, the more it provides an opportunity to re-decentralize on top of it.
What you say made me think of Hamachi, a nice VPN overlay network (for Windows, at first) from the mid-2000s, and guess what happened to it?
> For paid subscribers Hamachi runs in the background on idle computers. The feature was previously available to all users, but became restricted to paid subscribers only [0]
You should check out the ipop project [0], the technology behind socialvpn and groupvpn. They have an active and extensive github. [1]
The problem with technology like Hamachi, and even IPOP, is that the nature of NAT traversal means that for any given p2p network, "supernodes" will always be required. IPOP uses STUN/TURN (the same technology as WebRTC) for NAT traversal. According to Google usage stats (can't find the URL right now), about 10% of p2p connections over STUN/TURN require relaying all packets over the TURN server. So that means for every ten nodes, at least one will need a supernode to connect to the others.
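To illustrate why that ~10% matters, here's a hedged sketch of the usual connection strategy: try a direct dial, then hole punching, then fall back to relaying everything through a TURN-style supernode. tryHolePunch and the relay address are hypothetical placeholders of my own, not a real STUN/TURN implementation.

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// tryHolePunch is a hypothetical stand-in for the STUN-assisted attempt
// to open a direct path between two NATed peers.
func tryHolePunch(peer string) (net.Conn, error) {
	return nil, errors.New("both peers behind symmetric NAT")
}

func connectToPeer(peer, relayAddr string) (net.Conn, error) {
	// 1. Direct connection works if the peer has a public, reachable address.
	if c, err := net.DialTimeout("tcp", peer, 2*time.Second); err == nil {
		return c, nil
	}
	// 2. Otherwise attempt NAT hole punching (the STUN case).
	if c, err := tryHolePunch(peer); err == nil {
		return c, nil
	}
	// 3. Last resort: relay every packet through a supernode (the TURN case).
	//    These are the connections somebody has to run paid servers for.
	return net.DialTimeout("tcp", relayAddr, 2*time.Second)
}

func main() {
	c, err := connectToPeer("198.51.100.7:9000", "relay.example.net:3478")
	if err != nil {
		fmt.Println("no path to peer:", err)
		return
	}
	defer c.Close()
	fmt.Println("connected via", c.RemoteAddr())
}
```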
At first glance, the requirement of "supernodes" seems to impede the proliferation of true p2p networks, because it introduces a point of failure into the system controlled by one entity but relied upon by many. The problem is that "supernodes" cost money, and somebody needs to pay for them. So you end up with companies like LogMeIn, happy to provide a "p2p" solution, as long as you pay them to maintain the requisite supernodes.
However, there is no requirement that one centralized entity supply the "supernodes" in a network. It's 2016, we have "the cloud," and anyone can launch a "supernode," aka a cloud server with a public IP that can assist in NAT traversal.
I hope we will see more business models based around a combination of "p2p" and "federated" network architectures, where they are federated in the sense that a group of "super-peers" provides the "supernodes" required for functional NAT traversal on the rest of the strictly p2p network.
(Shameless plug: You should also check out my senior thesis, TorCoin: http://dedis.cs.yale.edu/dissent/papers/hotpets14-torpath-ab... -- I've been thinking about this a lot since then, but that was my first real exploration of the ideas I'm trying to express here.)
Interesting abstract, but what about just asking for donations?
Freenode does that, and so does public radio, but they seem uniquely positioned to do that. If I were running a bunch of STUN/TURN servers I wonder where I would ask for donations.
Makes me wonder how the Internet came to be. Something about DARPA funding?
The cost of a supernode is more than just monetary. There is a cost to the network when one single entity controls all the supernodes, because in almost all cases that means they have some disproportionate level of power over routing or quality of service. You wouldn't want one company running every Tor exit node, because then it's not decentralized.
A business model that relies on donations to one, or a few, entities controlling the supernodes perpetuates the centralization of infrastructure. Not to mention that donations are historically the most cost inefficient solution, especially compared to "market forces." If the efficiency of a decentralized system depends on the number and quality of supernodes, then some economic mechanism must exist to incentivize peers to become "super peers." The competition will ensure that the super peers who survive are the ones with the highest quality nodes at the lowest cost.
Then the question becomes: who pays the cost that gets distributed to the supernodes? In the case of TorCoin, we wanted to avoid the situation where people need to pay to join the Tor network. So our solution was a cryptocurrency with "proof of bandwidth" as its "proof of work." Just like a single Bitcoin represents some amount of CPU power that was expended to "mine" it, a single TorCoin would represent some amount of bandwidth that was transferred to "mine" it.
The idea was that a TorCoin would have value outside of Tor, and Tor was simply the mechanism used to mine it. So relay operators would mine TorCoins, then sell them on an exchange just like they would any other altcoin. This way they get value from providing bandwidth to Tor users, but the users do not need to pay for the service.
However this gets complicated very fast, because you're not talking about one person mining coins with one CPU, but pairs of people mining coins by transferring bandwidth between each other. So the priority of the paper was addressing the threat of collusion, where two malicious nodes (possibly controlled by the same actor) spoof terabytes of bandwidth between each other, flooding the market with TorCoins. Our key insight was "TorPath," a circuit selection mechanism that ensures every circuit participating in the TorCoin mining scheme is "privately-addressable but publicly verifiable." So each TorCoin (or part of a TorCoin) would represent a circuit that actually existed, and anyone could verify that via a public ledger, without revealing the identity of the circuit participants.
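As a deliberately naive illustration (this is not the TorCoin/TorPath construction), here's what goes wrong if a "proof of bandwidth" is nothing more than two relays co-signing a byte count: a pair of colluding relays can sign terabytes they never moved, and a signature-only verifier happily accepts it. That forgery is exactly what a publicly verifiable circuit selection mechanism has to rule out. The Receipt type and message format are hypothetical.

```go
package main

import (
	"crypto/ed25519"
	"fmt"
)

// Receipt is a hypothetical pairwise bandwidth claim, co-signed by both relays.
type Receipt struct {
	FromPub, ToPub ed25519.PublicKey
	Bytes          uint64
	SigFrom, SigTo []byte
}

func message(bytes uint64) []byte {
	return []byte(fmt.Sprintf("transferred %d bytes", bytes))
}

func main() {
	// Two colluding relays generate keys...
	pubA, privA, _ := ed25519.GenerateKey(nil)
	pubB, privB, _ := ed25519.GenerateKey(nil)

	// ...and co-sign an arbitrary, never-transferred amount.
	claim := uint64(1 << 40) // 1 TiB out of thin air
	r := Receipt{
		FromPub: pubA, ToPub: pubB, Bytes: claim,
		SigFrom: ed25519.Sign(privA, message(claim)),
		SigTo:   ed25519.Sign(privB, message(claim)),
	}

	// A verifier that only checks signatures accepts the forgery.
	ok := ed25519.Verify(r.FromPub, message(r.Bytes), r.SigFrom) &&
		ed25519.Verify(r.ToPub, message(r.Bytes), r.SigTo)
	fmt.Println("receipt verifies:", ok, "bytes claimed:", r.Bytes)
}
```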
I think the idea of non-CPU "proof of work" schemes is very interesting in general, and bandwidth is one of the most profitable avenues to apply it to. For example, imagine if the BGP system were operated along an incentive scheme like this, where instead of "circuits" we are talking about "peering." The path selection mechanism we devised would work in any routing system with multiple participants, not just Tor.
I'm really interested in hearing more about this. I mean "Internet 2.0".
Any pointers on where I can find material that speaks to the ideas you've expressed? Obviously I'll hit Google, but you might just have something better.
First of all, you can just tunnel whatever storage protocol over HTTPS if needed. The point of the quote is that we could be using Operating Systems that abstract away the various kinds of storage available so that you can use whatever combination of options work for you.
It's as if everyone were using Linux and paying for plans from, say, a cloud storage provider, but able to reliably and automatically mount that storage like any other network share.
With open technology and widespread fast internet, it's really not that hard to accomplish. We've already got the former. Now we just need a lot more of the latter.
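As a rough sketch of the "it's mostly plumbing" claim: the Go program below uses the golang.org/x/net/webdav package to serve a directory over WebDAV, which Linux, macOS, and Windows can all mount as an ordinary network share, and swapping in ListenAndServeTLS with real certificates gives you the HTTPS tunnel. The directory path and port are placeholders, and a real provider would obviously add authentication.

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	h := &webdav.Handler{
		FileSystem: webdav.Dir("/srv/storage"), // placeholder backing directory
		LockSystem: webdav.NewMemLS(),          // in-memory WebDAV locks
	}

	// Plain HTTP for the sketch; use http.ListenAndServeTLS with real
	// certificates to tunnel the same protocol over HTTPS.
	log.Fatal(http.ListenAndServe(":8080", h))
}
```

On the client side, file managers on all three major desktops can mount a WebDAV URL directly, which is the "like any other network share" part.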
I've never been entirely clear whether this referred to 'everything within the Plan9 organisation' or 'everything visible from a Plan9 machine including external independent systems'. The former is much easier to achieve. So much of the nonuniformity is because of competing administrative domains.
What is new about this is that it is none of the three things your quote listed as possibilities. The new thing is remote storage that has a smartly synchronized local "cache".
Nothing prevents a remote filesystem from having a local cache; in fact, that is something every remote filesystem will implement quite soon.
The real novelty of Infinit is that you can mix and match multiple storages and build a unified filesystem with encryption on top of this. Roughly something like truecrypt over aufs over (sftp+local fs+s3fs+whateveryouwantfs)
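For what "mix and match multiple storages" could look like at the code level, here's a hedged sketch (not Infinit's actual design): one BlockStore interface, several backends behind it (an in-memory store stands in for local fs, sftp, s3, and so on), and a composite that routes blocks across them by key hash. Encryption would sit as another layer on top, the way truecrypt sits over aufs in the analogy above.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// BlockStore is the one interface every backend has to satisfy.
type BlockStore interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, error)
}

// memStore stands in for any concrete backend (local fs, sftp, s3, ...).
type memStore struct{ m map[string][]byte }

func newMemStore() *memStore { return &memStore{m: make(map[string][]byte)} }

func (s *memStore) Put(key string, data []byte) error { s.m[key] = data; return nil }
func (s *memStore) Get(key string) ([]byte, error) {
	d, ok := s.m[key]
	if !ok {
		return nil, fmt.Errorf("%s: not found", key)
	}
	return d, nil
}

// composite routes each block to one backend by hashing its key, which is
// what lets several unrelated storages appear as a single address space.
type composite struct{ backends []BlockStore }

func (c *composite) pick(key string) BlockStore {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.backends[int(h.Sum32())%len(c.backends)]
}

func (c *composite) Put(key string, data []byte) error { return c.pick(key).Put(key, data) }
func (c *composite) Get(key string) ([]byte, error)    { return c.pick(key).Get(key) }

func main() {
	unified := &composite{backends: []BlockStore{newMemStore(), newMemStore(), newMemStore()}}
	unified.Put("home/alice/notes.txt#0", []byte("first block"))
	b, _ := unified.Get("home/alice/notes.txt#0")
	fmt.Println(string(b))
}
```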
Last year, I had a talk with Stephen Wolfram. He said that, for him, the most important secret to solving problems is "some version of confidence or arrogance." That makes him never fear any difficulty, but this arrogance inevitably shows up in other aspects of his life and affects people's impression of him.
Why doesn't that bother you? It reminds me a little of the giant middle finger that Bezos recently gave to his workers via his letter to shareholders, in which he effectively said that he knew the workplaces he fostered at Amazon were unhealthy, toxic, soul-crushing graveyards, but basically that he didn't plan on changing it and didn't give a shit if people didn't like it or even if it was demonstrably unhealthy for them.
Saying "I've become successful" (or in Wolfram's case, very marginally successful) "because of some arrogance" is not some redeeming, insightful virtue. In fact, in many ways the person is saying, "I took the lazy way of using my laurels and status to treat others badly and act entitled to my asshole tendencies."
Instead of working hard to succeed with class, Wolfram, like so many other tedious and unremarkable people that pinch up some minor bit of fame for a while, is just excusing his own laziness.