Why I do not want to work at Google (2011) (mail-archive.com)
176 points by luu on Nov 21, 2014 | 112 comments



"and the evidence suggests that it is to that that we owe the collapse of oppressive regimes throughout the Middle East and Northern Africa"

Never mind that those oppressive regimes got replaced by...new oppressive regimes.

I beg to differ that it is the Internet that makes regimes fall in North Africa. What makes regimes fall is the share of people who can't buy food going from 30% to 75%, because of commodity inflation (created by OUR WESTERN central banks).

You probably think revolution is great, but I have been in Libya and Syria, before and during the wars, and it is horrible. A civil war is the worst thing that can happen to a country. Americans may idealize it because they have forgotten what a real war is (fighting non-developed enemies 8,000 miles away is nothing like seeing your home in flames, your daughter raped, or your brothers killed).

I would love to use decentralized tools, but they are so bad. They have lots of features, but they are incredibly hard to use.

People will start using decentralized products when they are as easy to configure and install as a Mac. Only centralized tools like Facebook or Google provide the ease of use that my grandpa needs.


I agree. Well said.

In 2011 the new oppressive regimes had not risen yet. Even today the one in Tunisia seems better than the old one. If you are interested in what I thought about the revolution in Egypt, http://canonical.org/~kragen/egypt-massacre-sotu.html goes into more detail, from January 2011, just after #Jan25. The situation in Egypt is terrible today.

Violent revolutions have a tendency to submerge their revolution in their violence, because violence leads to rule by the most effectively violent. Those are not the people whose rule I want to live under.

I tweeted about this in May 2011, a few months before the decentralization post we're discussing, when the Syrian opposition was still mostly nonviolent:

https://twitter.com/kragen/status/65789586269421568

"@shadihamid Well, it [violent rebellion] hasn't worked out so well for the movement to overthrow Gaddafi so far, has it? Lots more dead than Bahrain or Syria."


>"and the evidence suggests that it is to that that we owe the collapse of oppressive regimes throughout the Middle East and Northern Africa"

Yeah, this is BS. That was just a Western media "human interest" story.

For one, those heavily into blogging and Twitter and the like in a place like Libya or Tunisia are invariably more affluent than the rest and more westernized (including more friendly to Western BS interests). So they make a good subject to showcase as "the voice of country X", when in reality they are nothing like that, statistically speaking.

Second, the "Middle East and Northern Africa" had tons of revolutions and collapses of oppressive regimes, leftist movements, anti-colonial movements and what have you, all throughout the 20th century, without Twitter and blogging.

Just because some place like Egypt is the first time a 20-something guy in S.F. has heard of revolution in those countries doesn't mean it's due to "Twitter".


> Americans could idealize it as they have forgotten what a real war is(fighting against non developed enemies 8000 miles away is not alike seeing your home in flames, your daughter raped or your brothers killed).

Southerners remember what that was like: 150 years ago Yankees were raping and killing their way across the Confederacy. Young folks nowadays have forgotten, but there are plenty of folks still alive who heard about it from their elders. It really wasn't that long ago.


No. Echoes of memories from your great-grandparents' great-grandparents are not even slightly on the same side of the room (or even in the same building) as living through an actual war on your own turf, civil or otherwise. I spent many years in a few different warzones, and you cannot possibly imagine what life is like for those folks. There is nothing in Western civilization that can convey these realities of war -- not even our own brand of tyranny or the relatively brief and cursory exposure of the vast majority of our soldiers to foreign warzones. In a way, this attitude is almost offensive -- no offense to you personally -- because there is an inherent disregard for the huge number of human beings living in awful shitholes where the human animal reigns.


> 150 years ago [...] there are plenty of folks still alive who heard about it from their elders.

"Plenty"? Care to flesh out your math, here? I'm working under the assumption that newborn babies make horrible first-hand witnesses and (75 years later) equally horrible second-hand listeners, and that not that many of either are going to live to 75 before talking about their knowledge.

It's probably more precise to say: "A lot of people here know a story about it", the same way a lot of Americans know about the Mayflower.

> Southerners remember

I think you're subtly equivocating here. The "remembering" going on is not at all the same kind as, say, "New Yorkers Remember 9/11".


That's not really comparable. My grandfather told me about fighting in World War II, but in no way does that mean I know what it was like to be there.


The scale of centralization is a bit overwhelming. Check out some of the things Google is hoping to get in to: http://www.google.com/ideas/projects/


The problem is, all of those guys who were so free in the nineties failed to create any noteworthy online services, so Facebook, Google, etc. took up the space.

I'm gonna get downvoted for this, but maybe if there were fewer nerd wars in the FOSS community and more thinking about real users, the Internet would look different nowadays.


I don't think it's true that we failed to create any noteworthy online services.

We built the internet. Maybe you've heard of it. We also created TCP, UDP, IP, DNS, email, the Web, Usenet, IRC, Git, BitTorrent, Tor, Bitcoin, and Wikipedia. We killed AOL, CompuServe, Encyclopedia Britannica, Solaris, the Information Superhighway, and the Advanced Intelligent Network. We were only unable to do smartphones because the carriers ruthlessly shut us out, demanding insane amounts of control over handset software and using their regulatory capture of the FCC as leverage, until Apple forced the doors open for us — but on Apple's terms. And we built most of the software that runs Apple, Facebook, and Google, too.

But how can we build new services to replace Facebook and Google on a decentralized basis, like email and the Web? That's a problem with both technical aspects — how can you build a distributed full-text query processor that runs on the machines of volunteers? — and social/business aspects — how can the people who benefit from these services effectively collaborate to get them created and improved? (Kickstarter and the like show a very promising direction for this.)
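
(For the technical half of that question, here is a toy sketch of one way a term-partitioned full-text index could be spread across volunteer machines. Everything in it, the node objects, the hashing scheme, the in-memory dicts, is a stand-in for real networked, replicated storage, not a description of any existing system.)

    import hashlib

    class VolunteerNode:
        """One volunteer machine, holding posting lists for the terms it owns."""
        def __init__(self):
            self.postings = {}                       # term -> set of document ids

        def index(self, term, doc_id):
            self.postings.setdefault(term, set()).add(doc_id)

        def lookup(self, term):
            return self.postings.get(term, set())

    nodes = [VolunteerNode() for _ in range(8)]      # pretend these are remote machines

    def node_for(term):
        # Stable term-to-node assignment; a real system would replicate each
        # term across several volunteers so queries survive churn.
        h = int(hashlib.sha256(term.encode()).hexdigest(), 16)
        return nodes[h % len(nodes)]

    def index_document(doc_id, text):
        for term in set(text.lower().split()):
            node_for(term).index(term, doc_id)

    def search(query):
        # Conjunctive query: fetch each term's posting list from the node
        # responsible for it, then intersect the lists locally.
        result = None
        for term in query.lower().split():
            postings = node_for(term).lookup(term)
            result = postings if result is None else result & postings
        return result or set()

    index_document("doc1", "decentralized search on volunteer machines")
    index_document("doc2", "centralized search in a datacenter")
    print(search("decentralized search"))            # {'doc1'}

A real version would also need replication, ranking, and incentives for the volunteers, but the basic shape, hash each term to the machine that holds its posting list and intersect locally, is the same.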

Those are the problems I want to be working on, not how to persuade Google Drive users to entrust a well-intentioned but unaccountable central authority with all of their family photos.


I'm not diminishing the achievements you mentioned, and I'm very well aware of them. You must admit, however, that not all of them can be attributed to FOSS, and all of them are quite low-level compared to what the average user needs to be able to use them effectively.

I just wonder if we need 10 major Linux distros, none of which is easy for an average user to set up and use. (Sorry, even Ubuntu is still not there.)

I am a big believer in the open web, and it makes me cringe every time I see a new cool technology that uses closed protocols (or rather APIs), as is happening with IoT right now. But it's hard for me to attribute that to anything other than a lack of leadership from the FOSS community.


FOSS and decentralized systems are not the same thing, although they are in some sense analogous (I wrote an influential draft essay about this in 2006 at http://osdir.com/ml/culture.people.kragen.thinking/2006-07/m...), and they tend to reinforce each other. Email is a decentralized system, and that's still true even if everybody's server is running Exchange and Lotus Notes, which are proprietary.

I don't think Usenet, email, IRC, Tor, and the web are "quite low-level compared to what average user needs to be able to use it effectively." In fact, I am at a loss as to how your comment is potentially relevant to mine.


Ubuntu is pretty much there. The only reason Ubuntu can't do some things is because of the requirement of Flash and DRM, both of which are obsolete or overbearing technologies we should avoid.

>it's hard for me to attribute it to anything else than lack of leadership from FOSS community.

So when it's a consortium of multi-billion dollar tech companies with massive R&D budgets, political connections and marketing dollars, it's FOSS's fault. OK.


Hey, what I just said is: spend less time on BS and more on creating valuable tools for everyday users, and the world will be a better place. It's not about fault, it's about lack of impact. Imagine how cool this world would be if Linux were the major consumer OS and open-source languages were the major ones used to create consumer software with open protocols. Call me an idealist, but I would totally sacrifice half of the Linux distros to achieve those goals... :)


It's a question of timing. We did have social networks in the 90s, but the devices at the edge weren't strong enough to make them compelling, particularly as regards photos.

If you rolled back to the 90s you'd also notice AOL was regarded as a giant invincible behemoth, in a similar position to Facebook or Google today.


Maybe they didn't create any "noteworthy online services", but they created all the infrastructure they depend on.


> Apple wants to relegate websites to second-class status on their popular computers, and exercises viewpoint censorship on what “apps” they allow in their “app store”.

I don't remember it that way at all. Does anyone else? From what I remember, there was very heavy consumer demand for the App Store, while Apple was telling everyone just to make web apps. They actively developed WebKit into a cutting-edge, standards-oriented, developer-friendly browser. I don't see how you could say they wanted to "relegate websites to second-class status".


Originally Apple did tell everyone just to make web apps. Renegade hackers made native apps for iOS anyway, creating consumer demand, and Apple relented. All this happened before I wrote that message in 2011. By 2011 webapps were already second-class. (Not as second-class as they are on Android, though!)


That seems to me now like it was just a ruse to buy them time. They went on to hamstring the browser from attaining native performance and never added all the direct hardware features you could get at with apps, which is what they would have done if they really intended the browser to be an app platform.


Yeah, that was when the iPhone was first released in 2007, and it continued for a while. But by 2011 the App Store was already becoming a behemoth: a huge source of profits for Apple, an effective content-curating tool, and a closed Apple-unique marketplace they could compare to the Play Store and say 'we've got more and better content, get an iOS device'.

In other words, by 2011 nobody at Apple was telling developers 'web apps instead of native please'.

Meanwhile, in the years since, they effectively cut things like WebGL on iOS because near-native performance in the browser was a threat to native apps. (iOS 8 finally, long overdue, changed this, now that the App Store is here to stay.) Or take blocking Nitro in anything but Safari, say Chrome: not a problem for regular websites, but a problem for JS-heavy web apps.


Websites can't send notifications on iOS, for one. (They can on Mavericks and Yosemite.)


That's a design/ideology conflict and not something that can be fixed. On OS X, you can leave a webpage open, and it must be open in order for the notifications to be sent. On iOS, you cannot (pages are terminated whenever there is memory pressure). Push notifications are just way beyond what a website should be able to do, and scheduled ones aren't really all that useful except for a handful of apps.


Web pages do not have to be open to send notifications on OS X.


> Google [will] delete your account with no recourse if you admit you’re only 10.

The US gov is mostly responsible here, not Google: http://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Pro...


When Electric Minds was running under my administration, I had a policy that we would delete any account from someone we knew to be under 13, just because the requirements for dealing with the personal information of children under 13 were way too onerous and resource-intensive for the operation we were running. Our policy explicitly stated, "This is a consequence of the U.S. Children's Online Privacy Protection Act, and Electric Minds is not responsible for this law or its effects." (In other words, "This may be bullcrap, and we may even believe it's bullcrap, but we've got to follow it anyway, so life sucks like that.")


Keeping your account on somebody else's server is mostly responsible, because that makes it vulnerable to nonsense like this. Google is very much oriented toward everyone keeping their accounts on somebody else's servers, ideally Google's. That's what I object to.


I generally try to avoid starting sentences like this, but here goes:

No. Just no. 10 year olds probably don't have the resources to run their own hardware, insofar as 10 year olds don't earn money, can't own property (in a legal sense), and are not wholly responsible for their own actions. Therefore, a 10 year old having an account with a 3rd party unbeknownst to the parents is a security / child protection / legal nightmare.

Saying "keeping your account on somebody else's server is mostly responsible" is disingenuous because the 10 year old has to keep their account on somebody else's server.

Edit: clarity


I am disappointed in the extreme level of rudeness in your response. I note that it seems to be occasioned by your not having been able to understand my message, with the result that the person whose dishonesty you are attacking exists only inside your own head.

10-year-olds can keep their data on their own server as easily as they can keep their books in their own bedroom in their own house, sleep in their own bed, call their parents on their own cellphone, and edit videos on their own computer. While the 10-year-old may not have legal title to any real estate, it is not therefore necessary for them to live in an institution. They can live with their family.

Hopefully that clarifies my message for anyone else who may have misunderstood it.


Who's responsible for the actions and security of a minor and their property? Can a minor enter in to contracts for services to connect the physical infrastructure to the outside world? Can a minor hold insurance policies against loss or damage to 3rd parties? Should a minor be tasked with the responsibility of keeping their belongings and data safe?

> They can live with their family.

Or legal guardian. That's my point. Children probably do need protecting, so their guardians are tasked, legally, with those responsibilities. Therefore, ownership, security, consequences of actions, etc., aren't clear cut when it comes to children.


> 10 year olds probably don't have the resources to run their own hardware, insofar as 10 year olds don't earn money, can't own property (in a legal sense), and are not wholly responsible for their own actions.

10 year olds can own property, legally.


Additionally they may also legally earn money. How else would children appear in movies?

The particulars vary by state, but here's a quick glimpse as to the federal perspective: http://www.dol.gov/elaws/faq/esa/flsa/026.htm

And of course one may work for one's self (start a startup) at any age.


I see your objection as taking the concept of individual ownership a tad too literally.

First, there's the family (or guardian) unit. Unless you're advising every individual person on Earth to run their own individual computing infrastructure.

Kragen's argument isn't for an atomized Internet, but for a decentralized one. If that means that there are local hubs or clusters that a neighborhood of 10 y.o.s could access and use, it would be a vastly better world than one in which only a handful of giants provided equivalent services.

You're also showing an impressive lack of imagination in terms of what it is to have an online presence. A decentralized system with a cache-and-forward, high-replication, mesh-net structure could well function with minimal personal investment as a point of entry. A smartphone, or even a feature phone, could well be sufficient hardware to enter into the system. I'm seeing Android devices priced in the $20-40 range. There are very inexpensive servers which offer similar functionality.

There's another economic class which is comparable in means to your hypothetical 10 year old as well: the vast underclasses of the world, with annual incomes ranging from $200 - $2000/year. Even among these, access to minimal levels of technology is highly sought simply because it hugely reduces the costs, and provides tremendous values, from communications.

Outernet (featured a few times on HN) is one example of an information service which is aimed at this population, with concerns over cost, energy access, censorship, surveillance, and content quality. It's a broadcast mechanism, but with some sort of mesh-net response network it could well turn into a bidirectional system (there's also investigation of making the service itself bidirectional).


WTF, how about keeping it on your fully decentralized family server, hosted "on premise" and backed up to your friend's house?

I think that was the point in that article.


That act was passed by elected officials


And google pays for lobbying. Interesting that most bills are written by private companies. I wonder what bills google wrote and gave to these "elected" officials.

Google isn't paying millions in lobbying for nothing.


Bet you $100 that if Google lobbied about that bill, they lobbied against it.


The bill was enacted less than seven weeks after Google was founded, at a time when users couldn't sign in to Google at all. (The Senate had approved it less than a month after Google was founded.) It's somewhat hard to believe that they had a lobbyist working on the issue in the company's first month of existence.


Thank you for bringing some sanity to this bizarre discussion, Seth. I miss you.


Sorry, my post was bad. I did not mean Google lobbied for that bill, because that would be impossible. I meant it as a general statement: many of these bills are written by private companies. Google definitely works on legislation.


Thank you! To be more clear, I was not saying Google lobbied for that bill. Though re-reading what I wrote, I admit it can be confused for that.


What you say may be true, but I very much doubt that Google would lobby for a law that creates headaches for them. COPPA compliance is a major PITA. I doubt any business would lobby for such onerous requirements.


Hardly. Google could simply ask the children to fax in signed permission statements from their parents, as websites that actually make an effort to support child users do. Neopets had this figured out 15 years ago. Google chooses not to do that despite having the resources, and they are responsible for the consequences of that decision.


Given that the article is from 2011, did this:

> and the evidence suggests that it is to that that we owe the collapse of oppressive regimes throughout the Middle East and Northern Africa

actually happen? I mean to say: sure, oppressive regimes collapsed, but to be replaced with what? Take the 'Arab Spring' countries Tunisia, Libya, Egypt, and Yemen: are they better off for having traded security and stability for (attempted) democracy?

> Google, of course, wants to solve these problems too. But it has a different, less-democratic approach in mind.

No company is a Democracy in the sense of dēmos 'the people' + kratia 'power, rule', where 'the people' are you, me and the next person. Maybe companies are democracies where 'the people' are the shareholders, at least to some extent, for some definitions of 'shareholder'.

Often we see "The Democratisation of x" bandied about as though it has to be a good thing, but I'm left wondering what the term actually means if the consequences are typically a tradeoff between security + stability vs. democracy.

From what I've seen the media has portrayed Twitter, Facebook, YouTube (etc.) as playing a role in the Arab Spring demonstrations, yet these are exactly the sort of 'unaccountable intermediary' you're railing against.

I think the reference to the Arab Spring without qualifying the consequences of the whole scenario from the vantage point of history is dishonest.

Democracy isn't synonymous with Security and Stability, and we should all probably have a long hard think about Security and Stability before we promote disruption.

(Edit: fixed a thing where I'd written the opposite of what I meant).


I wrote the article you're calling "dishonest" in 2011. The Syrian civil war was still mostly Assad bombing his citizens. It would not have been possible for me to "qualify[] the consequences of the whole scenario from the vantage point of history" in a document written before that history happened. I agree that Egypt, Libya, and Yemen (and also Syria and Bahrain) are worse off now than before the Arab Spring.

I agree that companies aren't democratic. I think that's a good reason not to hand our government to companies.

Historically I don't think there is a tradeoff between security and stability versus democracy. http://slatestarcodex.com/2013/10/20/the-anti-reactionary-fa... goes into some detail about the unstable, insecure history of undemocratic governments, specifically to rebut neoreactionaries who are calling for an end to democracy in order to return to an illusory imagined past of security and stability.

Democracies usually don't make very good decisions. They do seem to make many fewer catastrophic decisions than non-democracies, though.


Previous discussion from the last time this was posted: https://news.ycombinator.com/item?id=2933619


Much of that discussion was excellent, although some of it missed the point. I'll try to keep tabs on this thread and answer any questions people may have today.


My startup has been trying to make decentralized computing easy, but I'm just not sure there's a huge market for it. We took OpenStack and made it so you could connect nodes from anywhere on the net and rent them out, whether it was a server in a datacenter or even someone's desktop machine.

I'm just not sure there's a huge demand for it though; originally we were thinking that people would care about price, but it turns out most companies don't care that they're being fleeced by AWS. If they do care, they're probably using DO.

Anyway, here's a demo of everything working: http://youtu.be/998IYD_WomY It connects a bunch of desktop machines around the Bay Area on different networks (Comcast, Sonic.net) and blends them in with a "datacenter grade" cloud server.

The real problem with decentralization though is that the asymmetric nature of the first (last?) mile makes it really hard to do anything useful with individual nodes. If we all had symmetrical gigE FttH/FttP everything would be a lot more rosy, but so far that's really only been happening in a few places.


I'm not sure that's "decentralization" in the sense that I mean. A "decentralized" system, to me, is one where no single person or group of people can deny others access to it, for example by deleting their accounts or turning the system off. Decentralized systems allow distributed innovation more robustly than centralized or merely distributed systems, and so they tend to outcompete centralized and distributed systems when they clash economically.

I haven't watched your video, though.


Yes, it's only part of a solution. It's a decentralized platform, and what you're asking for are truly decentralized applications. The platform can potentially enable decentralized applications while letting you manage them in a centralized way.

I'm just not sure about your economics argument though; convenience usually trumps any kind of economic reason for using a distributed system over a centralized one. This is particularly so with most internet services because the cost for most things always approaches free. I mean, we're reading HN on a website, and not through alt.hackernews, right?


> Decentralized systems allow distributed innovation more robustly than centralized or merely distributed systems, and so they tend to outcompete centralized and distributed systems when they clash economically.

I don't think this is necessarily true. The Internet started out about as decentralized as it gets, but eventually the Internet coalesced around a few core services - first it was AOL and Myspace, now it's Google, Facebook, Twitter, Youtube, etc. And meanwhile, too many decentralized services have fallen by the wayside - anyone still using Diaspora?

Centralized systems offer efficiency and convenience - decentralized systems have more moving parts and thus more friction. This is a hurdle that decentralized systems have to overcome in order to become successful, and they rarely do, sadly.


(Full disclosure: I work for Google. My opinion does not necessarily represent my employer's opinion, yadi yada.)

This guy's rant is really about the need for decentralization. I fully agree with him. But I disagree that decentralization is incompatible with Google's primary business model (regardless of whether the Internet is decentralized, there will always be ways to do advertising). In fact Google Wave (http://en.wikipedia.org/wiki/Google_Wave) was a fantastic and radical attempt at decentralizing common use cases such as email, instant messaging, social networking, etc. Unfortunately this project failed to gain traction for various reasons.

To return to a more decentralized Internet, we need software and network protocols that let people easily host their mail and blog, publish their social pages, and share their vacation pictures, etc, without relying on the cloud but doing it via a device that runs at home. An ideal place to run this software would be your Internet router, as it really is a full-blown computer that is always on, always connected. And, as the router, it conveniently bypasses the issue of masquerading/NATing which is the one reason why non-technical people do not run server software more often. Another advantage is that uploading stuff to your Internet router (eg. sharing pictures) is much, much faster than uploading them to a cloud service (Wifi or Ethernet bandwidth can be 50x-1000x faster than the typical upload bandwidth of a home Internet connection.)

It should all work out of the box with zero configuration. That is the only way the idea can gain traction. Not everybody is a sysadmin, so your grandmother should be able to make it work. Want to enable your mailbox? Just tick the appropriate checkbox on your Internet router as easily as you would sign up for some mail provider. Want to follow the social lives of your friends? Your browser can render your custom Facebook-style wall by pulling posts and pictures feeds directly from your friends' Internet routers.
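
(As an illustration of what that last step might look like, here is a minimal sketch of a wall that pulls posts directly from friends' routers. The /feed.json endpoint, its format, and the router hostnames are all invented for the example; a real protocol would also need authentication and access control.)

    import json
    from urllib.request import urlopen

    # Hypothetical: each friend's home router publishes that person's posts
    # as a JSON list at a well-known URL.
    friends = [
        "https://alice-router.example.net",
        "https://bob-router.example.net",
    ]

    def fetch_feed(base_url):
        try:
            with urlopen(base_url + "/feed.json", timeout=5) as resp:
                return json.load(resp)       # e.g. [{"time": ..., "text": ...}, ...]
        except OSError:
            return []                        # friend's router is offline; skip it

    def render_wall():
        # Merge everyone's posts into one reverse-chronological wall.
        posts = [p for url in friends for p in fetch_feed(url)]
        for post in sorted(posts, key=lambda p: p["time"], reverse=True):
            print(post["time"], "-", post["text"])

    render_wall()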

We already have most of the technologies needed to implement such decentralized features: HTTP with cross-origin resource sharing, SMTP, automatic registration of DNS names to make you discoverable on the net, OpenID authentication, etc. Maybe some other needed bits can be pulled from the Wave protocol.

But the sad thing is that I am not aware of any attempt to implement any of what I described above. Perhaps it is a vision too ahead of its time. Or perhaps it is because there are 2 important problems that are very hard to solve in a fully decentralized way: (1) search, and (2) spam. You can easily search for and find your friend's blog via Google web search because it has an index of the entire web, but how do you provide this level of quality if the search engine is your Internet router? As to spam, you can easily filter email if you are the Gmail team running sophisticated analysis on the billions of emails processed daily because the more data you have the easier you can classify it, but how do you implement this filtering quality on an Internet router that does not have access to such a data set?

Perhaps the solution is to run most of the services in a decentralized way (email, instant messaging, social networking, etc) while at the same time relying on a few central services for some features like search and spam filtering.

Edit: thanks for the pointers to FreedomBox and Sandstorm, I will look into them.


> In fact Google Wave was a fantastic and radical attempt at decentralizing common use cases such as email, instant messaging, social networking, etc. Unfortunately this project failed to gain traction due to various reasons.

That’s a pretty weak summary. The product didn’t “fail to gain traction”. It pretty much failed to work in a basic way, at all.

It was the broken technical design decisions and broken technical implementation of Google Wave that doomed it to death, not any problem on the users’ side.

Also, anyone who says that Google Wave was decentralized, as implemented, is kidding themselves. That was Google’s marketing hype (and possibly even their eventual goal), but in practice none of the parts that would allow any kind of server support outside of Google were ever actually delivered.

There was an incredible amount of hype for Wave from Google, and initially there was a great deal of excitement from users and developers outside of Google, when it was first announced and when the first users started on it. And then as they let more people in and as time passed, the technical infrastructure was completely unable to scale (either with number of users or with size of individual conversations), and it became apparent that the web client was a buggy mess that would only work with small conversations involving a small number of people, and the servers couldn’t handle rapid adoption.

The semi-technical marketing documents described how it would be open and federated, with an open protocol implementable by anyone, and would use fancy modern algorithms (operational transforms) to handle multi-party updates to documents in real time. As delivered, though, the protocol was big proprietary binary blobs wrapped inside the incredibly verbose XML of the Jabber/XMPP protocol (most of the features of Jabber were ignored; as far as I can tell it was only chosen as a wrapper to give the protocol an illusion of openness, rather than for any particular technical merits), and instead of doing any kind of fancy diffs or sophisticated operational transforms, the entire content of a message was re-sent to every listening client for every keystroke. The Wave web client software was a big ball of spaghetti Java/GWT code which was closed source (maybe they eventually opened it?), and so it was effectively impossible for someone outside of Google to make an interoperable server or client.


And yet...

https://github.com/processone/android-wave-client

Stale, but it is out there.


Did he say decentralization was incompatible with Google's primary business model? What I read was this:

> Google is not institutionally opposed to this;

I think he's exactly right. Google does not oppose decentralization, but at the same time, the entire raison d'etre of a company is to centralize something in the form of a profit center. As far as companies go, Google is pretty good; they are not the threat to traditional internet values, but neither can they be its savior.


The FreedomBox is one effort in that direction:

https://freedomboxfoundation.org/

I haven't followed their progress recently, but apparently they had a release back in March.


http://freedombone.uk.to/

You've not heard of it because it's not gaining traction. The above link can run on a Raspberry Pi and achieves most if not all of the goals the FreedomBox set out to.


Google alumnus Kenton Varda has https://sandstorm.io/


His rant isn't about decentralization.

It's about centralization's bad effects. If there were a way for the bad effects to disappear, there wouldn't be a reason to rant. All things being equal, people prefer centralized utilities like electricity, water, and gas.

Just as it's possible to decentralize the Internet, I don't see why it's impossible to fight the bad effects of centralization. If you see insoluble problems, please feel free to list them here.


There are actually a great many efforts in this regard. You mention FreedomBox and Sandstorm, which are only two of them. I'm working on http://nymote.org and it'll be built using unikernels (specifically, Mirage OS - http://openmirage.org). Targeting the home router is something we've been working on for some time, but the work stretches from deep tech to human-computer interaction.

Most of this work is in academia for now but as soon as there are business models that work, we expect to see real deployments (beyond the hobbyists).


Git provides most of the infrastructure answers. I think the idea that someone would host a personal website in future is slightly backwards. Just clone their website's public repo and browse locally. This could be extended to social news feeds, where if you just put new content in your repo then other people that subscribe could see those updates mashed together into a single feed.

The problem here is the economic incentives to do it are low, but the technical barriers to solving this are genuinely getting lower and lower all the time, so at some point it will happen.


So to view your personal website on my phone, I need to clone an entire Git repo, with the entire history of the entire site?

Some people just treat some technologies as a "this can fix anything" tool when really they're well suited to one task.

This is the sort of thinking that suggests using a Git-based solution to sync binary files between computers: handling binary files is the thing even the biggest Git fanboy will admit Git does poorly, and yet someone wants to use it for that specific purpose.


You could make a shallow clone with just the objects you need for the front page. If we're taking Git literally here, you need to get the current head somehow, then fetch the commit object, then fetch the root directory object it points to, then fetch the index.html blob object it points to. If we're not using HTTP URLs to name the objects, and the objects in question are small, it would be pretty easy to configure the server to "optimistically" bundle the commit, the directory, and the blob into the response to the request for the head, so it could be a single RTT.

It might actually be more efficient to do a single-version shallow clone than to fetch all the relevant assets with separate HTTP requests. Git already supports shallow clones, and you can already use them to clone things like Github wikis, and now you can even push from them: https://stackoverflow.com/questions/6941889/is-git-clone-dep.... A --depth 1 clone of https://github.com/kragen/500lines is 7 megabytes, although the full history is only barely larger. My phone has 400+ megabytes of RAM.

More promisingly, though, Git blobs are identified by their hashes. That means that if both my home page and your home page use Twitter Bootstrap 2.3.2, your phone only needs to download it once to render both home pages, since it can see from the blob IDs in the Git directory that it already has the relevant blobs. (Assuming it hasn't been optimistically included in the initial message response!) It doesn't have to worry that one of us might be serving up a maliciously-backdoored version in order to steal users' login cookies on the other's site. By the same token, you can accept the blobs from anyone who has a copy, who might be closer to you on the network than the origin server. Especially if, like me, you're in Argentina with a 250ms ping time to the US.
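
(For concreteness, a Git blob's ID is just the SHA-1 of a short header plus its contents, which is why byte-identical assets on different sites end up with the same name. A few lines of Python, computing the same value "git hash-object" would:)

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # Git names a blob by SHA-1 over "blob <size>\0" followed by the bytes,
        # so the name depends only on the contents.
        header = ("blob %d\0" % len(content)).encode()
        return hashlib.sha1(header + content).hexdigest()

    # Stand-in for the real file; two sites serving byte-identical copies of
    # bootstrap.min.css get exactly the same blob ID, so a client that already
    # has this object never needs to download it again.
    asset = b"/* contents of bootstrap.min.css, identical on both sites */"
    print(git_blob_id(asset))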

(This assumes you're using something cleverer than the standard git clone protocol, which builds a thin pack including all the objects you lack.)

So git-cloning websites to view them could actually be faster than loading them over HTTP the way we do it now, because it eliminates a lot of the unnecessary security issues that we work around with bandwidth duplication and high latency.


You're kinda proving my point for me there.

The only upside you actually identified there is a reduced number of requests, which is already possible by using HTTP keep-alive and HTTP pipelining.


I was just explaining why the downsides you were writing about weren't serious. I didn't devote much effort to explaining the upside. I guess that a lot of things that seem obvious to me aren't obvious to you, probably because you haven't spent much time thinking about them. I'll try to do better:

1. You can use 10MB of widely-used JavaScript and stock icons on your web page, and the browser will be able to see the page after downloading 100K instead of 10MB. HTTP keepalive and pipelining don't help at all with that.

2. There are no more broken links, so people can link to stuff that isn't hosted on their own server without fear that it will go away next year, or be redirected to more commercially remunerative content, such as linkspam. (This is conditional on them caching a copy on their own server, of course.) HTTP keepalive and pipelining don't help at all with that.

3. Resources can be cached close to you on the network without you having to trust the cache not to feed you corrupted versions of them. (You do have to trust the cache to not report you to the feds for reading an article about how to download Interstellar.) This makes a huge difference in page load time by reducing latency. Again, while HTTP keepalive and pipelining can reduce the multiple of latency, they can't reduce the latency below 2×RTT to the origin server, and practically speaking they're limited to several times that. This is especially important if you're trying to host your personal pages on a high-latency residential internet connection, and/or browsing from a high-latency country.

4. Making every website not just archivable but also forkable enables a new kind of lightweight collaboration. Well, not totally new — I mean, we're doing it now on GitHub and, in a different form, on Wikipedia.

5. Naming web pages by hash rather than by IP address or by a DNS name that maps to an IP address makes it easy to host them on dynamic IP addresses, including with transparent failover when one of them goes down for a while.
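
(A small sketch of why points 3 and 5 hang together: when a page is named by its hash, the client can verify whatever any mirror or cache hands back, so it doesn't matter which machine served it. The mirror URLs and the hash below are placeholders, not a real protocol.)

    import hashlib
    from urllib.request import urlopen

    def fetch_by_hash(expected_sha256, mirrors):
        # Try mirrors in order; accept bytes only if they hash to the expected
        # name, so an untrusted or compromised cache can't substitute content.
        for mirror in mirrors:
            try:
                with urlopen("%s/%s" % (mirror, expected_sha256), timeout=5) as resp:
                    data = resp.read()
            except OSError:
                continue                     # mirror down, try the next one
            if hashlib.sha256(data).hexdigest() == expected_sha256:
                return data
        raise LookupError("no mirror could supply the object")

    # Example (placeholder hash and mirrors): nearby caches are tried before
    # the faraway origin, and none of them needs to be trusted.
    # page = fetch_by_hash(
    #     "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    #     ["http://cache.isp.example", "http://origin.example.org"],
    # )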


CDNs already solve #1 and #3.

#2 is no different from mirroring content, e.g. how fireballed.org works, but over a longer time.

#4 collaboration is about working on something together. Every visitor to your site is unlikely to need/want/care about collaborating on it.

#5 You're suggesting an SHA1 hash is easier to remember than a domain name? Seriously?


This "X is no different than Y/X already solves Y" conversation reminds me of thinking, around 1992, that clicking on a link to a page on another web site was "no different than" connecting my FTP client to a new FTP server, logging in, cding to some directory, and downloading and viewing a file from that directory. In some abstract sense, yes, they're the same thing; but in terms of human experience, they're very different.

You ask, "You're suggesting an SHA1 hash is easier to remember than a domain name? Seriously?" Consider that if I seem to be saying something that's obviously ridiculous, even to you, then maybe you've misunderstood what I was saying, as in this case. I'm going to give you the benefit of the doubt and assume that your lack of understanding is genuine, not feigned to give you an excuse to be rude, but you were rude anyway.


You suggested Git based web page hosting would allow for local mirrors of content so they are available regardless of the availability of the original site.

I told you that is already happening in practice, without the overhead of Git.

In your example, I would not agree those two things are the same. Clicking a link that goes to an http:// URL and clicking a link that goes to the same file via ftp:// are, I would say, reasonably similar for the end user.

You're right, I didn't read into #5 enough. So for the purposes of "easier dynamic IP hosting", you're suggesting that browsing to a SHA-1 hash means it can come from any computer that has a copy of it?

But nobody is going to remember hashes, so you need to use DNS or similar. Say "foo.com" has a GIT record, the value of which is a SHA-1 hash... but then the client needs to find somewhere to get that from, so presumably you're suggesting some kind of peer-to-peer system for hosting?

And all of that is somehow better than the existing model, where you simply have a DNS server with an API to update the IP address of an A record quickly?

You've also skipped over a big issue here: this whole concept relies on a website being nothing more than static files.

It also effectively removes the ability for the author to have any idea at all about how many people view his site.

Git is reasonably good at what it does. It has some flaws, but for versioning text-based content and fostering collaborative work among developers/authors, it's a reasonably good solution.

As a replacement for HTTP web pages, caching proxies, CDNs, parts of DNS? not so much.


Good luck making it faster than rsync.


While Git itself might not be the right thing, the principles behind it may well be. You might like to check out Irmin, a library to build distributed storage systems.

http://openmirage.org/blog/introducing-irmin


Not necessarily git - see IPFS http://ipfs.io/


> I think the idea that someone would host a personal website in future is slightly backwards. Just clone their website's public repo and browse locally.

If you're cloning the public repo, that still requires someone to host the website -- instead now, they are hosting the whole history of the website (or not -- they could rewrite history and just have the most current version) -- and anyone who wants to view it has to pay the cost (bandwidth/latency) to clone the repo before they can view it.


I wrote about this a bit in 2005, although Git had just come out, so it wasn't clear that Git was going to rule while Codeville would be forgotten: https://www.mail-archive.com/kragen-tol@canonical.org/msg001...

There's still a social/economic question of who pays for storage — presumably someone needs to be able to say "This data is important to me; please don't trust the rest of the Web to keep a copy," or things that nobody happens to read for a few months will be lost. So there's still "hosting". But it might be more like what EBS or Gandi provides than what Amazon EC2 or Rackspace provides.

I think distributed content-named storage (like CCN, Tahoe-LAFS, Git, etc.) solves the "distributed immutable hypertext" problem, but there are still other problems to solve: how to make things mutable (each of these three has a somewhat ad-hoc and application-specific method for this), how to do asynchronous event notification, how to establish contact between previously disconnected parties (unsolicited messages), and so on.

Think about how you'd build applications like blog comments, or email, or Google Search, on top of such a distributed content-named blob store.
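
(To make that concrete, here is a toy sketch of blog comments on top of a content-named blob store: each comment names its predecessor by hash, so the thread is an immutable chain, and the only mutable state anyone has to track is the hash of the newest comment. The dict stands in for real distributed storage, and everything else here is invented for the example.)

    import hashlib, json

    blobs = {}                                   # stand-in for distributed storage

    def put(data: bytes) -> str:
        # Content-named: the blob's only name is the hash of its bytes.
        name = hashlib.sha256(data).hexdigest()
        blobs[name] = data
        return name

    def add_comment(prev_hash, author, text):
        blob = json.dumps({"prev": prev_hash, "author": author, "text": text})
        return put(blob.encode())

    def read_thread(head_hash):
        # Walk the chain from the newest comment back to the first.
        while head_hash is not None:
            comment = json.loads(blobs[head_hash])
            print("%s: %s" % (comment["author"], comment["text"]))
            head_hash = comment["prev"]

    c1 = add_comment(None, "kragen", "first post")
    c2 = add_comment(c1, "reader", "nice article")
    read_thread(c2)                              # newest first

The unsolved parts listed above (mutability, notification, rendezvous) are exactly the parts the dict and the hand-passed head hash paper over here.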


>Git provides most of the infrastructure answers. I think the idea that someone would host a personal website in future is slightly backwards. Just clone their website's public repo and browse locally.

Are you also assuming that Git will one day reach a point where everyday web users can use it? Because unless you're using software to abstract away the ways that one normally interacts with Git, I just don't see your average web user going anywhere near it.


At some point it will happen, probably as FOSS because, as you say, the economic incentives are low. While the economic incentives are low, mainstream adoption will probably also be low.

Also, the security consequences of such a scenario could be an issue. Organisations with huge resources have a hard time with security; how is the average person going to compete with an organised adversary?

We move now into the realm of speculation.


we [would] need software and network protocols that let people easily host their mail and blog, publish their social pages, and share their vacation pictures, etc, without relying on the cloud but doing it via a device that runs at home.

Like Xanadu? http://en.wikipedia.org/wiki/Project_Xanadu

If so, how would this new initiative succeed where prior attempts failed?

Perhaps the feature set envisioned for Xanadu (e.g. two-way linking, transclusion) was just 54 years too early.


Sending mail the old-fashioned way requires a complicated header verification process. Anybody that has tried to send mail from a PHD script knows this. There is a secret layer of trust that home routers are incompatible with!


Are you sure that "complicated" and "secret" don't just mean "I don't know how it works" and "I haven't bothered reading the documentation"? There really isn't much secret about it, and I do indeed send all my emails using a mail server running on my home router. Just because you haven't bothered to learn about something, doesn't mean it's somehow secret or unreasonably complicated.


Loo-ok we have an angry troll hier!! ;D

(also: WTH is a PHD script?)


My phone automatically fixes words, I was referring to PHP and sendmail.


Freedombox is a waste of time, this is what you want:

http://freedombone.uk.to/


One reason why companies would rather get fleeced by AWS etc. is that hosting your own server is both troublesome and expensive. This isn't much of a problem if you live in the temperate region, but it is a HUGE problem if you live near the equator. Temperatures can easily rise to 33 degrees centigrade and beyond, and maintaining a server without round-the-clock cooling in an air-conditioned room is close to impossible (unless your server is a single Raspberry Pi). That's why people would rather pay for IaaS services; sometimes it is simply cheaper for someone else to host it.


A serious point here: warehouse-scale computing beats p2p in DSL (or even feasible FTTP) environments. Essentially the model of the internet we have is broadcast, with decoration from the upstream channel.

To change this, services that use upstream bandwidth and make money for the pipe providers are required. Otherwise, no upstream.

Freeloading cannot work. Repeat: CANNOT. Because you need fellas with high-school educations to look after the fibre and the boxes.

Downstream will remove the boxes and put them into warehouse scale facilities.

So - Haxxors, get yourselves to the internet of things if you wish to see democracy preserved.

Freedom is in peril.

Defend it with all your might.


You may be interested to know that one week before, I wrote a post exploring your point (you are correct!) in detail: https://www.mail-archive.com/kragen-tol@canonical.org/msg002...

Summarized as, "Peer-to-peer overlay networks are inefficient on ADSL networks. ADSL networks are almost twice as efficient as SDSL networks. Better alternatives require redesigning the physical layer."


Actually the game is lost already because cellular modems are being embedded in products directly, skipping any ability you have to firewall them, all in the name of convenience.

The fact is we rely too much on the cloud to connect two adjacent devices anyway. The whole notion that much of this data has to go upstream at all is the problem.

(Shameless plug) This is what led me to write this: http://montrealrampage.com/king-ludd-19-manifesto-for-a-frie...


Why do you think that for FTTP? I do see the problem with ADSL as it is currently implemented (as analyzed very well by kragen in the mail linked to in a sibling post to this), but why would the same problem apply to FTTP?

Also, I don't get how you get to freeloading - upstream bandwidth is as much part of the product that the customer is paying for as the downstream bandwidth is.


Downstream bandwidth is mostly delivered using caches. Upstream bandwidth is not delivered using caches. So to provide the backhaul for upstream bandwidth we will do deep reach into the datacentre <-> home and interconnect in the data centre. Therefore fttp will not provide p2p connectivity unless there is a need for local connectivity because deepreach and datacentre networking is rendered non viable by sheer traffic volume. But the traffic will need to be geographically localised and/or the services will have to provide the economic support for backhaul uplift.

I think we can probably do petabits on single-mode fibre bundles now with the right boxes, but the trick is the right boxes, which are expensive and take lots of electricity to run. So even if you can pony up for an FTTP roll-out, the use cases are going to be constrained by the network architecture that is used to support it.

Now, for most people this is not a problem as the up channel is basically irrelevant in terms of bandwidth in the current consumer internet, but if you are the sort of person who imagines a decentralised and democratic consumer internet which is not mediated by massive supernational companies, or if you are the sort of person who is interested in where the value is captured in the industry value chain I think this does matter.

Sharing other people's content will not fund a democratic internet even if governments fund and regulate FTTP. We need a new class of applications that generate revenue sufficient to pay for the infrastructure that will facilitate them. If we get that, then the infrastructure will also facilitate a shift away from the data centre (not its elimination, though, as it will be the way that the current use cases are delivered).

If you have any ideas please write them up - before the industry dies!


I'm sorry, but I still don't get what you are trying to say.

For one, caches are not exactly located on customer premises, either, right? They tend to be located in places that have easy access to power and are well-connected with fibres, and so far transmission speeds seem to be mostly going up.

Also, well, yes, social networks tend to be geographically localised, so I would very much expect p2p application traffic to also be geographically localized!?

Widely consumed content that's distributed in a p2p fashion obviously is just as amenable to caching as widely consumed content that's distributed by a central service, if it's distributed through some kind of content-addressed network. If that's cheaper for the ISP, they could just put caching nodes into their data centers.

Finally, I completely don't get why you think my communication should "generate revenue". My telephone calls don't generate any revenue either, do they? I simply pay someone for moving my bits around, that should be sufficient motivation for them to take care of moving my bits around.


I think that caches work where you have a vast imbalance of requests for a particular bit of content - they seem to me to define centralization.

In terms of generating revenue, I mean providing a service that people are willing to pay for at a level that will fund a network that enables the use case and decentralization. Current services tend toward deep reach from the datacentre, because that's cheap. If all services that people are willing to pay for are suited to that model, that's what everyone will get. To have a different infrastructure model you need a service that people will pay for that is not well served by deep reach from the datacentre.

It's not the moving of the bits that is at issue, it's the how the bits are moved, where and who by.


"I think that caches work where you have a vast imbalance of requests for a particular bit of content - they seem to me to define centralization."

Huh? Caches for geographical distribution are actually a form of decentralization! The problem with the existing ones is that they are mostly under centralized control, but there is no technical reason why that has to be, and an ISP's caching web proxy (which weren't that uncommon back in the day) is a simple example of caching infrastructure under decentralized control that could help an ISP reduce the load on its upstream connection for often-requested content. You don't need YouTube to copy a video from your machine to the cache at some viewer's ISP; that's just the way it is commonly done right now because that way YouTube can make money. Or, if you want to decentralize further, you could copy to a cache on the machine of some customer of said ISP in some geographic region: the more requests there are for some content, the more likely it is that there is a copy in your neighbourhood that doesn't need to be transferred from the original source.


Maybe if Google actually open-sourced as much software as they consume, that would offer a way out. Some people would still want to run the centralized as-a-service tools because it's on a huge professionally-run infrastructure and because of network effects. Others might want to run the same services locally, to support privacy/decentralization goals like the OP's. Google's main revenue stream (ads) shouldn't be affected, and they'd even benefit from outside contributions.

So why don't they do this? Because of the difference between "shouldn't" and "wouldn't" at the end of the last paragraph. What if code to support ads - or other invasions of privacy - were embedded into every other thing they might open source? Then the sheer difficulty of refactoring to "sanitize" everything else would probably create a sufficient barrier to opening it up, and revealing how all of that ad infrastructure really works might hurt their business in other ways.

When Google continues to run as the new AOL (closed source, walled garden) I don't think it's part of a deliberate philosophy. Nor is it an accident. It's because their continued uber-prosperity, if not their actual survival, depends on it.


I don't think it has to be that extreme. They do have a corporate culture of extreme secrecy, which I think comes from growing up in the shadow of Microsoft, Yahoo, and other much bigger companies. But I don't think their survival or even prosperity depends on secrecy any more. However, it's deeply embedded in their corporate culture, and I think it's actually spreading to other Silicon Valley companies.

There is an issue of incentives, but I don't think it's as strong as you think. I think it's just that running services makes them money, while releasing open-source software doesn't, so they devote lots of effort to running services, and comparatively little to releasing open-source software.

It would be very challenging for them to open-source "as much software as they consume." That would be a many-billions-of-dollars effort. But it wouldn't be necessary — the great thing about software is that you can copy it, so even if you only release a tiny fraction of the amount of software as you "consume", everyone still benefits.


Maybe tag with (2011)? It's possible some of the institutional issues have changed in the past couple years.


Well, they have changed, but they've changed for the worse. Google has gradually been succumbing to the pressures I identified in that post; witness the move of Android to put gradually more services into proprietary Google Play Services, the ban on disconnect.me, the years-long struggle over pseudonymity on Google+ (finally resolved for the best), the ugly debasement of search results to drive traffic to Google+, the ban on non-Chrome browsers accessing Hangouts, the abandonment of XMPP compatibility by deceptively pressuring Google Chat users to move to Hangouts instead, and so on.

These are by no means crimes, but they're more evil than the things Google would have done in 2008 or 2009, and they're certainly not moving the internet in the direction I want to see it move in.


> the ban on non-Chrome browsers accessing Hangouts

There is no such ban. I've been using Hangouts with Firefox and IE for months. If you attempt to access a Hangouts URL with Firefox, for example, it prompts you to install a Firefox plugin.


Plugins are not websites.


The "Google Talk Plugin" is required for Chrome as well. At least on Linux, it's a separate install.


> the ban on non-Chrome browsers accessing Hangouts

What do you mean by this? The documentation lists support for Chrome, IE, Firefox, and Safari.

https://support.google.com/plus/answer/1216376


I haven't tried using Hangouts lately, so maybe it's been fixed. It got broken back in August; http://robert.ocallahan.org/2014/08/choose-firefox-now-or-la... explains:

"I can understand asking why Hangouts doesn't work in Firefox. In short, we wanted to transition to WebRTC sooner rather than later, and at the moment there are things holding us back on both our side (e.g. upgrading our ICE implementation) and the Firefox side (e.g. supporting multiple video streams).

"But overall, surely you're not arguing that transitioning a major Google application from a proprietary plugin to an open web standard somehow demonstrates that Chrome doesn't value web standards."


It doesn't look like it was fixed. I tried:

1. Go to http://www.google.com/hangouts/

2. Click "get hangouts"

3. Click "computers"

Result is "You'll need to download Chrome before installing the Hangouts Chrome extension. Do you want to download Chrome now?"

Edit: I think I remember being told on twitter that this was a confusing UI that would be changed. But that was several months ago.


I think the "confusing UI" thing becomes apparent if you click and try it. By hangouts on "computers" they mean as a desktop-installed app (which is just a wrapper around a Chrome app). I don't know why you'd want that (basically just a bookmark on your desktop?) but that's what it is.

If, however, you just start a hangout inside of gmail, it works fine in Chrome, Firefox, Safari, and IE (just hung out with myself on two computers in multiple browsers to test that out) and there was never a time that was broken, contrary to the GP post.


Well, when you click "get hangouts" the options are "Android, iOS or Computers". So anyone that wants to run it on anything but a smartphone gets told that it can only run on Chrome. I don't see it mention anything about an app as opposed to running in a browser.

Perhaps this is just a confusing UI, but if so, it would be very easy to fix. If that's the case, I'm puzzled why it hasn't been fixed.


Yeah, I had never visited that site until that discussion was making the rounds on twitter, so, no clue. I could see an argument for "this is a site where you can install hangouts as an app". There's nothing particularly wrong with using Chrome as a runtime (imagine if Mozilla's old Prism project had stuck around and people made "native" apps with it... requiring Firefox to run them would be just that: a requirement to run the app, because that's how Prism worked), but if your motivation was to get users to use hangouts, I don't know why you wouldn't also mention that you probably already have it if you use gmail, so just open it up in there.

My only claim is that if you open up a chat window in gmail and hit the video button it works in other browsers and that using hangouts that way was never broken.


If so, I wonder why the site doesn't just say "to run Hangouts on a computer, just load a hangout in your browser" (possibly with a link to the right place). It seems such an easy thing to fix, and it was pointed out publicly, yet no change has been made.

It seems unlikely to me that the assumption is that people that pick "computer" over "smartphone" distinctly want a native app as opposed to just running hangouts in the easiest way possible. All they did was click on "get hangouts".


As I understood it, Google updated Hangouts to use PNaCl so that the needed code would be downloaded automatically instead of requiring users to install a plugin. Browsers that don't support PNaCl should have continued working with the legacy plugin. I'm not particularly familiar with the transition, but it sounds like they unintentionally broke the older implementation. The Google+ team in particular seems to iterate quickly, so it's not particularly surprising that things will break from time to time.


There is no way to get notifications if you're using Firefox or Safari.


A missing feature != ban. Plus I'm pretty sure you are wrong.

"This setting applies to all Hangouts notifications from Gmail, Google+ and the Chrome extension on the same desktop computer."

Gmail and Google+ are both supported in all major browsers.

https://support.google.com/hangouts/answer/3111923


Of those, the Google+ one is my favorite and I am glad Vic Gundotra got fired.


Bloody hell, talk about rose-colored glasses.

No, back in the nineties, you could not do any of the things he talks about. Not unless you were a high-status employee of a handful of major corporations or universities. Most people couldn't get online at all, and those who could were lucky to have client-only access to e-mail and the web.

Nowadays, anyone with a credit card and an Amazon account can set up an always-online server running pretty much any software they choose. The Internet is far more decentralized, by the criteria he invokes, than it was back then.


Your experience may have been limited, although I am not going to claim that mine was typical.

As I explained in the article, I did the things I talked about (running online services, including email and web pages, accessible from the entire internet) with a US$20/month dialup internet account on an ISP run by a group of MUDders who ran the ISP as a way to play MUDs, starting in 1997. I installed my first web server (from EIT's Webmaster's Starter Kit) temporarily on an IRIX workstation in a university computer lab, in 1994. I was an undergraduate student at UNM, a state university in the second-poorest state in the US, although I admit UNM was unusually progressive. The year before, I was an unpaid student system administration intern, with a user account on all the Suns at the math department, assigned tasks like "please get tvtwm to compile on SunOS 4.1.4; Prof X wants to use it".

I agree that it's pretty awesome that anyone who wants to pay Amazon or DigitalOcean US$5 a month (and can get access to a credit card they'll accept) can run any internet-facing server software they want. Unless they're WikiLeaks, say. Or infringing on an invalid Amazon software patent. Or they just don't want random employees to have hardware access to their machine, for reasons of security and privacy.

So I don't think the internet is far more decentralized. It's bigger, which is pretty great, and it's a lot cheaper, which is even better, but it's also more centralized.


He never said it was cheap and accessible; what he said was that it was decentralized. In other words, no one could stop you from doing any of those things, and the fact that it was harder to get online is beside the point. The vast majority of services were running open protocols, and there were really no gatekeepers except for DNS, I suppose, but even that is a decentralized protocol, even if the registry is centralized by necessity.


Because Google makes its money by spying on its users.

It's also an advertising company, which means it brainwashes and deceives people into buying garbage.

Both activities are unethical, and I wouldn't want anything to do with either.


> My friends Len Sassaman (who committed suicide in the first few days of July), Bram Cohen, Jacob Appelbaum, and Zooko O’Whielacronx have made substantial contributions.

One of these things is not like the others. Three of these people have made significant contributions, in code. One of them has talked a lot and claimed the work of others as their own.



