I invented the web. Here are three things we need to change to save it (theguardian.com)
385 points by perseusprime11 on March 12, 2017 | 242 comments



Surprised that no one has mentioned the new EU data protection regulation (the GDPR), which goes a long way toward fixing the first issue mentioned in the article, the loss of control over our personal data (though only for users in Europe). I studied it in detail, as I work in data analysis and advise companies on this, and I honestly think it is one of the best laws the EU has produced so far:

It gives users a multitude of rights, such as being informed about exactly which kinds of data a company holds about them (and even getting a digital copy of that data), how the company uses that data and for which purposes. And if you're subjected to algorithmic decision-making (e.g. an algorithm decides whether the bank should grant you a loan), you have the right to know which kinds of algorithms were used in the process and to contest the decision. You also have the right to demand the deletion of your personal data and to revoke the right of a company to process it, as well as to demand correction of inaccurate data. The legislation also allows for severe fines and penalties for companies not respecting the regulation (up to 4% of the yearly turnover of the whole company group), so even companies the size of Google or Facebook should have strong incentives to comply.


You also have the right to demand the deletion of your personal data and to revoke the right of a company to process it, as well as to demand correction of inaccurate data.

This is probably the biggest change. Previously, at least in Europe, the emphasis had typically been on letting people know what data was being collected and requiring correction of inaccuracies, but much less on whether the data was allowed to be collected in the first place or on letting data subjects require its deletion.

I'm a little worried about whether the practical implications of this have been properly considered, which is something the EU has historically been quite bad at doing when it comes to technology and business laws. For example, under (65) in the regulation[1], which is primarily about the right to have personal data deleted and the "right to be forgotten", I see few provisions allowing a business to keep personal data that it legitimately collected with the subject's consent, even where the cost of deleting it is prohibitive. An obvious example is data that also exists in backups taken during the period of storing and processing it, all of which would need to be updated in non-trivial ways to remove that data.

[1] http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:O...


Yes, it surely creates some work for companies, but on the other hand, what good would the regulation be if you could keep personal data around just because it happens to be on a backup tape? Companies simply have to implement suitable backup schedules that ensure the data gets deleted within an acceptable period (e.g. two weeks). That is doable without actively erasing any specific record: you just let whole backup versions expire.
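To illustrate, here's a minimal sketch of that kind of rotation (my own illustration, assuming timestamped snapshot files in a single directory and a hypothetical two-week policy; nothing here is prescribed by the regulation):

    import os
    import time

    RETENTION_DAYS = 14  # hypothetical policy: snapshots expire after two weeks

    def prune_backups(backup_dir):
        # Delete whole backup versions older than the retention window.
        # No record-level scrubbing is needed: once data is erased from
        # the live system, it disappears from the backup set as the last
        # snapshot containing it expires.
        cutoff = time.time() - RETENTION_DAYS * 86400
        for name in os.listdir(backup_dir):
            path = os.path.join(backup_dir, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)

Run something like that daily from cron, and the retention window, not a per-request scrub, is what bounds how long deleted personal data can survive in backups.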


You and I have very different definitions of the word "simply".

Arranging proper backups at all is not something to take for granted when you're dealing with small businesses that have many other things to do. But obviously backups are important for safeguarding the provision of products and services to all customers, and it's important that any backups that are made are handled with proper regard to both security and integrity.

Requiring businesses to separate every tiny item of data that might ever be legally required from every tiny item of data that is collected and used with consent for reasonable purposes, just in case some customer one day decides to retrospectively withdraw their consent for some or all of that data, could easily become absurdly disproportionate. I hope it would go without saying that incentivizing businesses not to keep backups of all important data because of the compliance overheads is insane.

However, without such fine-grained separation, two weeks might be far too short a period to keep backups. To use my own businesses as an example here, we have accounting and reporting obligations that potentially require several years of data. The reporting information is typically derived once a year during reporting season, from straightforward records kept in the main databases and/or spreadsheets. However, we would probably have to completely restructure those records and denormalize all kinds of things in order to delete everything we don't strictly need for some legal purpose, which would be a huge amount of work.

That's just the structure of the original data. Then you have to consider things like deduplication in online backup services, where it's practically impossible to guarantee the complete destruction of all instances of certain data without destroying all backups that ever involved that data and starting over.
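To make the deduplication point concrete, here's a toy sketch of a content-addressed store (my own simplification, not any particular backup product). Identical records are stored once and shared by every backup that contains them, so truly destroying one record means rewriting every backup that references its chunk:

    import hashlib

    chunks = {}   # sha256 digest -> record bytes, stored only once
    backups = {}  # backup name -> list of chunk digests

    def store_backup(name, records):
        digests = []
        for rec in records:
            digest = hashlib.sha256(rec).hexdigest()
            chunks.setdefault(digest, rec)  # deduplicated across backups
            digests.append(digest)
        backups[name] = digests

    store_backup("2017-01", [b"alice,1.2.3.4", b"bob,5.6.7.8"])
    store_backup("2017-02", [b"alice,1.2.3.4", b"carol,9.9.9.9"])

    # Erasing Alice's record: the shared chunk can only be dropped once no
    # backup references it, which here means rewriting both snapshots.
    target = hashlib.sha256(b"alice,1.2.3.4").hexdigest()
    print([name for name, ds in backups.items() if target in ds])
    # -> ['2017-01', '2017-02']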

If this is the situation for small businesses that typically only collect a small amount of personal data in the first place and for obvious and necessary purposes, I shudder to think of the implications for organisations that actually process personal data as part of their main purpose rather than incidentally. I'm not sure it's reasonable to assume, in general, that it would even be possible to totally separate legally required data from everything else in such organisations, and there would surely be a lot of grey areas.

Now, please don't misunderstand me. I'm all for reasonable regulation to protect individuals from exploitation. I'm a privacy and civil liberties advocate, and I run my own businesses the way I hope others would run theirs, even if sometimes that means not doing things that would probably make us more money because they also make us feel uncomfortable. But there has to be a sensible balance, and the EU does not have a good track record of balancing its business regulations sensibly. (See also: EU VAT, cookie law, various provisions in the last round of consumer protection rules, etc.)


Companies need to separate personal data from other data as early as possible. Yes, this requires some work and rethinking of data structures, but it is doable. Why do you think it would be impossible? Do you have a more concrete example that we can discuss?

In general, I think requiring that companies do not hold on to your personal data indefinitely is a pretty reasonable regulation. If there were exceptions, e.g. for backup data, it would provide a convenient loophole for companies to keep the data.

Also, if companies keep copies of personal data lying around, it increases the risk of the data being stolen or leaked to the public. We have seen that even for the largest companies it's impossible to avoid "losing" data once in a while, so making sure that this data contains the least amount of sensitive information possible is very reasonable. The regulation does not even assume that companies are malicious; it just assumes that sh*t happens and tries to mitigate the potential damage to individuals.


Companies need to separate personal data from other data as early as possible. Yes, this requires some work and rethinking of data structures, but it is doable. Why do you think it would be impossible?

I'm very wary of making that assumption, because so much data could potentially be personal data even if it's not obvious. Remember that the real criterion here is data that is or could be linked to an identified individual. With the kind of progress being made with data mining and analysis and the kind of processing power being devoted to those activities today, there are few safe assumptions any more about what becomes impersonal data just because it's been "aggregated" or "pseudonymised".

Let's consider a common example. Suppose a business operates a web site, and like most such businesses it keeps server logs. Those logs are useful for a wide variety of purposes and some of the data may remain useful for long periods, to allow analysis of things like how the site is being used or whether certain patterns are useful for detecting potential threats, or even to provide evidence that a customer did in fact use the services on the site during a certain period in the event of a dispute over charges.

In themselves, those logs probably don't contain personal data. However, each record does include data such as an IP address, which in practice may be quite easy to link to a specific customer, making everything in that record personal data.
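As an aside, the usual partial mitigation is to truncate addresses before the log line is ever stored. Here's a minimal sketch (my own illustration; it's the same trade-off that Google Analytics' IP anonymisation makes):

    import ipaddress

    def pseudonymise_ip(raw_ip):
        # Zero the host part so a log line is harder to tie to one
        # subscriber: /24 for IPv4, /48 for IPv6. This reduces, but
        # does not eliminate, identifiability.
        ip = ipaddress.ip_address(raw_ip)
        prefix = 24 if ip.version == 4 else 48
        net = ipaddress.ip_network(f"{raw_ip}/{prefix}", strict=False)
        return str(net.network_address)

    print(pseudonymise_ip("203.0.113.42"))  # -> 203.0.113.0

But whether such truncated logs stop being personal data at all is exactly the kind of grey area I mean.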

Now, suppose a customer who has been using that site for a while stops, and then files a notice to remove all personal data about them that the site operator isn't legally allowed to keep despite that notice. In order to comply with that request, must the site operator therefore delete all records based on the server logs, including any backups or derived data, to which that customer might be connected?

I can't immediately see why the site operator would be allowed to keep those records with a literal reading of the new rules. However, removing them would potentially undermine useful and reasonable business functions such as those mentioned above. Moreover, the cost of doing so might be substantial, and the adjustments required so the infrastructure used to process those logs can support this sort of retrospective editing might also be substantial.

In such a case, I think the balance would usually be too far towards the individual. The imposition on the site operator is great, both in the effort to comply with the request itself and in the damaging effects on reasonable business practices. The risk to the visitor of that potentially identifiable data being used for typical purposes in connection with server logs is low. Unless there are other relevant factors that point the other way (perhaps if the site deals with a particularly sensitive subject) the cost to the site operator is almost certainly disproportionate to the benefit to the individual.


IP addresses are an interesting example, as they're explicitly mentioned as personal data in the regulation, since in many contexts they're sufficiently unique to be associated with a given user.

I'd really like to discuss this further; if you're interested, feel free to send me an email (address discoverable via my profile).


1) How does this interact with backups? It would seem impossible to build a safe backup system that can comply with "delete all this user's data" faster than the expiration of the backup retention period. Recovering from backup could potentially reintroduce data about someone who requested deletion.

2) How specific is "for which purposes"? I can't imagine companies would ever collect explicit opt-ins for every new SELECT statement in their codebases. It seems like the only thing to do is list the broadest possible set of purposes upfront (see the sketch after this list of questions).

3) How do you decide whose data it is? For example, HN comments can't be deleted because they're considered the internet's data, not yours.

What if I sync my contacts containing your phone number, or a photo of us together? If you demand the deletion of "your data" then do my contacts and photos disappear?

What if you make a scene at some establishment, or default on a loan? Can you demand that their record of "let's not do business with this person again" go away?
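To make (2) concrete, here's a hedged sketch of what "the broadest possible set of purposes upfront" might look like in practice (entirely hypothetical schema, nothing from the regulation itself): consent is recorded once per declared purpose and checked before each processing run, rather than collecting an opt-in per query:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Consent:
        user_id: str
        purpose: str  # e.g. "order_fulfilment", "marketing_email"
        granted_at: datetime
        revoked_at: Optional[datetime] = None

    def may_process(ledger: List[Consent], user_id: str, purpose: str) -> bool:
        # A query is permissible if it serves some purpose the user has
        # consented to and not since revoked; the broader the declared
        # purposes, the fewer new opt-ins each new query needs.
        return any(c.user_id == user_id and c.purpose == purpose
                   and c.revoked_at is None
                   for c in ledger)

    ledger = [Consent("u1", "marketing_email", datetime(2017, 3, 12))]
    print(may_process(ledger, "u1", "marketing_email"))  # True
    print(may_process(ledger, "u1", "profiling"))        # False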


What if I sync my contacts containing your phone number, or a photo of us together? If you demand the deletion of "your data" then do my contacts and photos disappear?

This is one of the trickiest areas in deciding what is reasonable. Obviously there are advantages to being able to share personal data about other people for your own benefit. On the other hand, every time you do that, you are potentially giving someone personal data without the consent of its subject.

I don't think we fully understand the implications of modern technologies in this area yet. However, I suspect we'll be learning some lessons the hard way over the next few years, as the correlation and processing of that data starts to catch up with the volume of data that's been collected.

It's also possible that some of the people giving up that data about other people, or more likely the businesses that encourage individuals to do so, are going to come under a lot of scrutiny even in terms of compliance with existing law and regulations. For example, if you install a social network's app on your phone, and that app uploads your contact list complete with names and phone numbers to their database, then both you and the social network have obviously just compromised the privacy of everyone on your contacts list. That much is black and white, but if the social network concerned then uses that data for any purpose other than providing whatever services you are explicitly requesting for the contact list you already had, then from a data protection point of view it becomes not so much black and white as just black. I'm a little surprised that data protection regulators, particularly in Europe, have taken such a hands-off approach to this issue for as long as they have.


Woah, didn't know it went this far. If anything, this would be great to see applied to Facebook (which has been anything but transparent about what it does with your data when your account is deleted). Google already has some relatively transparent controls on what data it has on you, and allows you to wipe all or parts of it.


Max Schrems, an Austrian lawyer, sued Facebook Ireland over this and got them to release all his data. He also keeps a website where he explains how to get yours: http://europe-v-facebook.org/EN/Get_your_Data_/get_your_data...


I tried this back in 2012 after seeing Max profiled on Ars Technica. As someone who never had a FB account I was after my 'shadow profile' data.

Facebook emailed back stating:

"There isn't a Facebook account associated with the email address from which you are writing. This might be because you don't have a Facebook account or because you already deleted your account. In either of these cases, we do not hold any of your personal data."

I didn't ask for data attached to an email address, I asked for any personal data they had collected on _me_, and their answer was evasive and non-responsive. I didn't pursue it any further; I know they were bullshitting me, but I had to move on to other things.


But the law only applies to companies.

The Internet as a whole never forgets.


The Internet also has to make money at some point, which means there's usually a corporate entity in the chain, one whose money (for better or worse) eventually touches Western banks.

So yes, the Internet never forgets (things it can make a buck off of).


Actually no. Saying the internet needs to make money is just as wrong as saying the roads need to make money. The net is just infrastructure that has to be maintained and paid for. "Making money" is absolutely not a necessity.


Ok, we are taxed for the roads, and some roads do require you to pay, such as toll lanes. Think of Google and Facebook as the restaurants and gas stations along the way.


Roads do need to make money. We just happen to have a larger concept of "make money" that encompasses increased commerce.

Mailboxes might be a better example of your point.

I'd argue that the current web looks more like roads than mailboxes though. We would have something more like Minitel if we'd gone the mailboxes route.


OTOH, it allows public authorities to bypass these regulations for "criminal purposes" and does not define any absolute safe haven rights from that.


Sounds quite similar to Personal Data Protection Act (PDPA) in Singapore:

http://www.pwc.com/sg/en/personal-data-protection.html


> You also have the right to demand the deletion of your personal data

How does it work with data you're legally required to keep? E.g. for tax reasons.


Excellent question! Article 17 of the regulation (http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:320...) lists some exceptions where the controller (the entity holding your data, e.g. Facebook) is allowed to keep some of your personal data. Most notably, paragraph 3(b) states that data may be kept "for compliance with a legal obligation...", which includes the keeping of tax records. Also, as the other poster already said, the regulation does not apply in the same way to public institutions like the tax office.

However, the controller would not be able to keep any data beyond what is needed for the purpose for which the exception holds, and the data cannot be used for anything else.


"Dear Mr tax man. I request that you delete me."

Financial institutions and the government are exempt for "necessary information".


People often joke about the EU being toothless, don't they? What penalties exist? Presumably there is a grace period that gives companies time to build the systems needed to respect it. I feel like most small companies will find a way to skirt (or ignore) this, as such systems could easily match the complexity of their products themselves.


Fines are only one pillar of the strategy, as the new regulation also makes it significantly easier for individuals to sue companies in civil court over data misuse and loss.

But you're right, in the end it will depend on how severely the individual EU countries, as well as the Commission, prosecute companies that don't obey the standards. And as in other areas, there will be fraud and companies trying to circumvent the regulation. All in all, I think the standard of data protection will increase significantly.


Small businesses don't need to build complex systems to respect the law. If someone wants their data they can ask for it by email. The business can then manually gather the data from their databases and send it as a ZIP, PDF or whatever.

You can worry about building a system when it makes financial sense compared to the time spent manually dealing with data requests.


ZIP/PDF would not qualify as an exchange format, as the regulation clearly states the data must be "in a structured, commonly used and machine-readable format". XML would do, but a PDF definitely would not, as it's not machine-readable (in the sense that you can easily extract the structure of the raw data from the file).
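For example, a minimal sketch of a qualifying export (hypothetical tables and query helper on my part; JSON shown, which, like XML, is structured, commonly used and machine-readable):

    import json

    def export_personal_data(user_id, db):
        # Gather everything held about one data subject into a single
        # structured document another system can parse directly, unlike
        # a PDF, whose structure would have to be scraped back out.
        export = {
            "user_id": user_id,
            "profile": db.query("profile", user_id),
            "orders": db.query("orders", user_id),
            "consents": db.query("consents", user_id),
        }
        return json.dumps(export, indent=2, default=str)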


This might be naive, but... shouldn't the internet be as far removed from regulation as possible? This just opens the door to more regulation by central authorities.


Why should it be as far removed from regulation as possible? Why not strike a balance where we regulate certain things that are important to us, and leave open the rest?

More regulation is a good thing if things we as a society deem critical can only be accomplished through regulation.


We need to have a balance - definitely a fine line.


The new EU data protection regulation is a huge win for users. I think it's going to be amazingly hard (and expensive) for enterprises and SMBs to comply. The amount of work and liability to prepare for is massive.

For any smart consultants who are up on PII compliance and associated data security practices, there's an endless amount of work here.


European data privacy laws have been largely ignored by the big US companies thus far, partly because those companies were sheltered by the Irish government, which did not really go after the endless number of complaints that were filed. In theory the new rules should improve things, because they take the form of a regulation that applies directly, rather than a directive like the previous legislation.


If my business runs servers out of Canada that Europeans use, what are the enforcement mechanisms for this law?


European businesses will be punished under the same terms if they use a service which is not EU-compliant. So they can only use compliant services.


Ok. That kinda makes sense.

Does it flow through to subsidiaries or EU citizen owned foreign enterprises? Because it seems like this could lead to a regulatory race to the bottom.


It's not important where your servers are or whether you are a European company; what matters is that you offer services to people in the EU (the marketplace principle). I am not fully clear on how this will apply to foreign companies that do not advertise to or target EU users, though.


I would truly love a new "web" which is effectively style-free. I think the existing one is fine to continue for the general public, online shopping, social networks, etc., but I'd love a parallel, information-dense system that uses ultra-lightweight browsers that work on every device and platform, accessing machine-readable data with some standard stylesheet that concentrates on readability and keeps the "good" bits of the modern web, without the fluff.

It would be great for technical blogs and news, project sites, wiki type data stores, discussion forums, etc.

Maybe everything in this "new" web is static, no stylesheets except browser-side for users to customise themselves.

I'm not sure what the actual answer is but I know the existing web is broken beyond repair.


I work on Beaker Browser. Version 0.7 will have markdown sites. Here's what that looks like: https://twitter.com/pfrazee/status/840228255529590784

If you ask a layman to differentiate between "the browser" and Facebook, they might not know the difference. In a lot of ways, Facebook is a browser for a set of content types like posts, videos, business pages, etc. Same logic applies to say YouTube: it's a specialized "subbrowser" for videos. If pressed I think I'd say the only reason YouTube or Facebook aren't browsers is because they're not decentralized. They browse a fixed set of end points.

The AMP project is kind of similar: a specialized sub-browser for mobile articles. And it's like what you describe, actually: constrained, but not decentralized.

I don't think we need a new browser or web to accomplish your ask

Browsers could specialize by running "sub browser" overlay applications. If you want a constrained subweb with markdown-only sites, then sure, why not? It's not that different from a markdown-only publishing service-- the YouTube of markdown-- except the app logic is on the client side, and it's browsing a decentralized web.

Our markdown website feature is built in, but it could be moved to userland as an overlay: an app that's triggered when the user turns it on, or goes to a certain type of site. It may sound like a stretch, but our bigger picture is to invert the relationship with services to create thick client-side apps, in which case an overlay app is just a particular class of app with a particular class of permissions.


Thanks for the reminder to try out Beaker! It looks like a really cool project and I hope you find success with it.


Could Beaker be distributed as a browser extension for Firefox and Chrome?


I think this would constrain the vision of Beaker. I can't point to any concrete examples (like, where the extension API doesn't support something beaker does), but I can imagine wanting a less sandboxed environment when developing the platform for distributed web apps.


Sounds a bit like Opera Unite.


This type of "web" would run perfectly over encrypted networks as well since it would carry much less data. It would solve the problem of cluttered/fluffy web and insecure/easily-surveilled communications.

Being realistic though, this would only serve highly functional computer users seeking simplicity and privacy.

A lot of internet users view "beautiful" webpages as a display of trustworthiness and importance. And that's a very important trait if you are a business.


> A lot of internet users view "beautiful" webpages as a display of trustworthiness and importance.

Not so much. A lot of web users like simple, predictable pages. Flyout menus spazzing all over the page, giant headers breaking the space bar, and obscure hamburgers hiding useful features just confuse and annoy people who don't know about CSS element blocking.


Those aren't beautiful pages.

We geeks, who can understand the code, decry the complexity of a beautiful page and prefer not to go there. But we're 1% of the audience.

The other 99% are used to equating beauty and style with quality. More style == more quality. Less style == less quality.

The pages we love because of their simplicity don't do well in the mass market because of their simplicity.

I have taken to calling this the "Kardashian Problem" - I don't understand why the Kardashians exist. I am forced to accept that they do exist. Therefore I do not share a worldview with the people who pay money for the Kardashians to exist. The Kardashians are very wealthy, so there are a lot of those people. I am a taste minority, so building things that I like won't make money. I have to build things that the Kardashians would like. I don't understand what they would like, so I must test everything!


I know what you mean. Frankly I think the Kardashians are a force for evil. Promoting selfishness, superficiality and female independence through hyper sexual objectification.

Why are they so popular? Total mystery to me.


"selfishness, superficiality and female independence" One of these things is not like the others.


"female independence through hyper sexual objectification" is meant as a single element in that list, at least that's how I read it.


I hope this is a case of forgetting the Oxford comma.


I think we can be charitable here; I'm 99% sure most of us would agree with the intent of the OP.


I was hoping for an erroneous "in" ;)


The implication seems to be that female independence gained at the price of "hyper sexual objectification" is not as noble as that gained through e.g. hard work and responsibility?


I'm not the OP but I read the third item as being "female independence through hyper sexual objectification".


Maybe because it lets everybody feel superior to someone else.


Maybe I misunderstand what you mean by beauty.

> The other 99% are used to equating beauty and style with quality. More style == more quality. Less style == less quality.

I dunno. Try sitting down with an older relative and watching them try to use a heavily-styled page. There's a decent chance they won't think to click the hamburger. They may accidentally mouse over a flyover and get confused/annoyed when an unwanted menu covers the screen. Or they may try to use the flyover and fail when it retracts because they moved the pointer off the menu for a moment on the way to a nested submenu.

I admit the floating nav-bar breaking page down is a personal pet peeve, though. Seriously, folks. When I page down, I expect the text at the bottom of the screen to appear at a specific place near the top of the screen, and my eyes subconsciously jump there to continue reading. Instead, your nav-bar covers up some unread text, or changes size because "reasons," or you hooked page down and got the scroll distance wrong. You probably won't get it right, so please don't do it. If I want to "nav", I can go back to the top of the page. Or you can put the "nav" on the left, since vertical space is precious, and narrower text is easier to read.


Most of the really unskilled users struggle with the complexity of the average webpage. Ads that imitate contents, videos, pull-down menus... they get lost in it.


Remember that for the 99% of the population that aren't us, form > function.

They don't care if it's hard to use, as long as it looks good.

We're weird because we think form follows function.


The nicer the package, the worse the contents.


I think I get what you're saying, but after thinking a little about it I can only see the big problem of standardization.

How can a discussion forum signal that one message was sent in reply to other message? Or how can a wiki say that one word links to another article?

If you don't have a standard for all this (which means a standard for everything anyone could invent to run on top of this style-free web), then people will come up with their own ways of doing it, and the browser will have to support those different ways; thus people will keep inventing new ways of signaling that a message is a message, a user profile is a user profile, a link is a link, etc.

Thus we end up with styles and the web as it is today.


So, basically, Gopher with user style-sheets?


And better search engines to find obscure documents on the servers of small, obscure colleges.


> I'm not sure what the actual answer is

I wouldn't say this is the actual answer, but it would go some way if browsers supported nicer default styling.

Businesses are always going to want their sites to look conspicuously "designed", because it signals to the user that it's a successful business that can afford to piss away money. But for the rest of us it would be nice if you could publish an unstyled website that looked a bit prettier / more modern than current unstyled HTML. Maybe with a modified doctype or meta tag or something to say "it's OK to render this with nice styling".


You can experience a better web. It's not easy, though; you'll need to give up a lot. I turn off JavaScript in Firefox (must it really be done through the javascript.enabled pref in about:config?).

A lot of websites still work, and I appreciate those sites even more. Gmail still offering an HTML version, for example.

Then there are those that don't, like YouTube. I've had to effectively give it up or find a workaround like youtube-dl and mpv.

What does suck the most is I can't use my banking site!

If you must have some JS in your life, Sunday is my cheat day. All the JS and all the calories you can handle, one day of the week!


I have JS off 99% of the time and I don't feel that I give up a lot! Rather the opposite:

Whenever I see something interesting here on HN or somewhere else and the site wants me to turn on JS, I realize that the subject may not be that interesting at all and just close the tab. Big time-saver! Win-win. :)


Exactly my approach. I only enable it for work (sometimes I actually need to do webdev) and web sites that I really need to use, like travel agencies.


I recommend NoScript (a Firefox plugin) for blanket or selective JS blocking on web pages.


Browsing with NoScript is fine if you just visit the same websites over and over. If you actually like to _surf_ the web, the experience is reduced to a never ending configuration nightmare. At least that's my experience. Nice in theory, but a terrible UX.


Seriously, I've been using it for more than 5 years and I've only enabled it for local sites, and everything is working.

It depends on what sites you browse. I don't do any social media, Google or any other crap, so it's enabled for 0 internet sites with no problem.

If I need to activate it, I have a shortcut key (Vimperator) that opens a new Firefox profile (private and prepared to load crap) for the current site, so I don't load any terrible UX into my default profile.

NoScript, uBlock and SDC are a must for the current state of the Internet. It's the difference between dropping your garbage out the window and putting it in the bin: you need to put some care into it. :)


If I were to use something that required configuration, I think I'd use https://github.com/gorhill/uMatrix. I did use it for quite a while, but ended up ditching it. Now I'm on uBlock Origin, HTTPS Everywhere and Self-Destructing Cookies. Although I may lose some control, it just works.


I do that. One discovers that some sites include a dozen third-party scripts. Often enabling the first-party one is enough. Sometimes I have to enable some obvious JS for displaying videos or comments. Many sites work well enough and display their content without all the related links, advertising, trackers, etc. Some sites insist on using JS to display content that is already inside the main HTML file. Others break because their main script cannot handle the failure to load some other file. That used to happen when blocking Google Analytics; the popularity of adblockers fixed that.

All considered, it's an interesting developer experience, and the sites I work on keep working even if some scripts don't load.


I don't see that as a problem, really.

Well behaving sites end up on your whitelist. Everything else gets tossed into the "yet another bloated crap page" bucket.


Can't your browser turn off JS on a per-domain basis? Just wondering.


Firefox doesn't seem to offer this without installing an extension. Somewhat annoyed that Firefox, with the reputation it has for privacy and choice, doesn't offer a simple toggle in its settings panel.


Chrome can do the opposite: whitelist per-domain.


uMatrix does this well (if you are on Firefox)


We already have it. It's called HTML. Disable CSS and JS and you've got it! Standard UI only.


The uncomfortable truth is that even if style-free, information-dense content would be better for you, it wouldn't be better for the advertisers that currently fund this content. And nobody is going to pay for this content out of pocket when they can pay for it through the tax of terrible UX and being reduced to a data product bought by businesses.

If the web is broken, it's not because of the technology, but the shaky economic foundation upon which it's built.


The web wasn't built on an economic foundation of ad revenue. The web was built on the economic foundation of both university-provided and personally/privately funded servers.

What ad revenue did was wildly expand the number of people trying to exploit the web for revenue, by generating sheer quantity regardless of content. I agree that that's a shaky foundation to base the web on.


I don't think you can just throw out everything produced by professional newspapers as useless content.


Just 90% of it.


You can still have adverts on style-free sites:

    <a href="https://adverts.com/?ad-id=12345"><img src="https://adverts.com/ads/12345.jpg" alt="advert"></a>
Then there's paid-for promotional content, which is one of the current trends on the web anyway.

As long as information can be rendered then adverts can be included.


Trivial to defraud.


Only if you see the current "pay-per-click" model as inevitable. In fact, for a long time advertisers were very happy to just pay for a certain space on a newspaper page, without even knowing how many people would actually look at this page. Many advertisers still pay for space where they have no precise figure on how many people will ever see their ad (billboards, TV, radio).

Granted, I think such a world on the web would be difficult for all the tiny blogs that earn (usually very little) money through ad networks, where the advertiser has no idea which pages their ads will show up on. But for larger sites the "fraud" issue doesn't arise as much (because they have a reputation to lose), and you might also just pay for being visible there. The Deck is pretty successful in its niche with this model, and I am sure for brands like the New York Times it would also work well.


So are the current ad models. That's why ad networks have heuristics to detect click patterns and time spent on the advertiser's page. The former is easy to replicate, since it's all done server-side anyway. The latter would require a less information-dense page with a "read more" link to approximate what's currently done in JS et al. Not a perfect solution by any means, but it wouldn't be the worst thing in the world if standard ad banners were replaced with more tailored solutions like affiliate links (e.g. blogs about building tools would link to online stores; blogs about software could link to online bookstores selling books on that software).

At the end of the day, if someone wants to promote a product and someone else wants to make money promoting products, then an advertising model will spring up.



I agree completely with the need for a new web. Bad information is all across the web, and gimmicky, flashy clickbait is everywhere. Many people are becoming weary of and desensitized to everything out there. We need to create a clean and simple, no-bs web where information is clear and reliable.

Wikipedia, I think, is a small picture of what it could look like, especially if better monitoring is enforced.


> Bad information is all across the web, and gimmicky, flashy clickbait is everywhere.

That's the price you pay for a web that isn't controlled entirely by governments or the media. The alternative is some service like Facebook determining what the truth is on your behalf, and you becoming nothing more than a pig feeding at their trough.

People cite disinformation, bad information and "fake news" as evidence that the web is broken, but it's not. That's evidence the web is working as intended - the problem is with the humanity it's reflecting.


>> Bad information is all across the web, and gimmicky, flashy clickbait is everywhere. > That's the price you pay for a web that isn't controlled entirely by governments or the media.

My N=1 anecdata seems to show the opposite:

More trustworthy correlates with no gimmicks and no ads: Wikipedia, academic articles, experts' and organizations' blogs, sometimes even HN.

Less trustworthy correlates with flashy: average online news, Facebook & co.

(as I said, N=1, bear with me)


> People cite disinformation, bad information and "fake news" as evidence that the web is broken, but it's not. That's evidence the web is working as intended - the problem is with the humanity it's reflecting.

This should be on top.


I would truly love a new "web" which is effectively style-free ... a parallel, information-dense system that uses ultra-lightweight browsers that work on every device and platform, accessing machine-readable data

That sounds a lot like the Semantic Web, another thing that Berners-Lee has promoted over the years but that has gained limited traction so far.


That's what we already had/have.

How is that different from anything you can already do?


The difference is that the author doesn't realize nobody else wants this, or people would be styling their sites like that.


The rise of "readability" features appears to be at odds with your assessment.


Readability is driven by (a minority of) consumers. The fact of the matter is that if the producers want to deliver crap by default, they will deliver crap by default.

Creating a new thing for producers to use when they could just do that with their existing medium changes nothing.

It's like devising new receptacles for discarding unused apple cores so people don't throw them on the ground. It's not going to help. People don't litter apples because they have to.


There's already a proliferation of minimally styled sites and blogs like this one. There's also a proliferation of tools—particularly static site generators—that are meant for creating that kind of site. Finally, there's a proliferation of browsers that "work on every device and platform." (Admittedly, they're not lightweight, but they're not that bad if the sites are lightweight.)

Can you elaborate on what creating a parallel, information-dense system would achieve? I mean, I agree the web could be more efficient and lighter weight, but there's nothing stopping us from doing that on the web.


Static site generators are an interesting case, which can go either way.

Because the site is static but the creator still wants it to be interactive, you tend to get a lot of JavaScript dependencies that fire off a bunch of 3rd-party requests instead.

Form handling, comments, user profiles and the like, along with the usual analytics and ads.

The resulting mess can be much heavier.


That's a good point, although there are many static site generator and theme combinations that don't have that problem, either by not being interactive or by limiting the interactivity to when the creator is editing locally.


Indeed. They can let you save on bytes, or let you drag megabytes.

I'm seeing both cultures, and hoping that the lighter wins.

But the dichotomy is an interesting statement on how the wider public thinks.


If you look at pages today and compare them to pages some years ago, it seems to me that because of usability best practices, they have become a lot more alike.

The step to a default style doesn't seem such a large one.

When enough people use Twitter Bootstrap, we could just support that natively in the browser.


Vannevar Bush had this kind of idea back in the 40s. I'd love it.


> Today marks 28 years since I submitted my original proposal for the worldwide web. I imagined the web as an open platform that would allow everyone, everywhere to share information, access opportunities, and collaborate across geographic and cultural boundaries.

Seems to me that the above and the points raised in point 2 sit on opposite sides of the spectrum. Either you get a free and open internet where everyone can publish content as they like, or you police who and what can be published. The spread of misinformation seems to be a direct result of the democratic nature of the internet.


The spread of misinformation is better countered by educating people about the potential and existence of misinformation, rather than policing the web.

Just like children learning and growing wiser about everyday social deception etc., as opposed to policing everything people do and say offline.


Is this just an opinion, or do you have any data on this?


Opinion. I mean we tell kids to be wary of strangers, but we can't assign a policeman to every stranger.

Easier to teach people that they can be deceived or that bad things can happen, than to prevent any deception or bad thing from ever occurring.


How do you cope with the fact that a large slice of the population seeks out bad things and refuses good things?

The obvious examples are US news sources such as Fox News, Breitbart and Infowars [1]. However, it could also apply to fast food eating habits.

[1] http://uk.businessinsider.com/conservative-media-trump-drudg...


I don't know how to reconcile this.

On the one hand, I believe information and speech should be mostly unfiltered. If people want to spread information that denies the moon landing or climate change, or extols the existence of teapots revolving about the sun, by all means. I'd rather know who they are and allow them to publicly exercise their ignorance. I assume the public would help cause a correction of the zeitgeist.

On the other hand, the fact is that some people become deeply misinformed and do things against their own interests (e.g. voting for a candidate who promises to undo systems that benefit the voter), which can affect all of us. To spare us going down the rabbit hole of party politics, I'll just say I read a recent interview with a voter who wants to get rid of x, even though x crucially provides them a life-saving benefit y, which they want to keep. How does that even make sense? You delve deeper, and realize that some individuals just listen to the mantra of x being advertised as a terrible thing by certain media and public figures, which is easier than going into the details of how specifically it's good and how it could be improved upon.

I'd also add that technology, like the internet and smartphones, is affecting our behavior far faster than we're aware, and in some ways I think we need to acknowledge our vulnerability. When media companies or groups of websites can cheaply spread misinformation, it is VERY HARD to combat, because good information takes time to produce and interpret.


Your concerns are entirely fair, but this also isn't a new issue. Tabloid journalism has been a problem for as long as there have been tabloids. That phenomenon is well known to be highly influential around election times among the part of the population who read those tabloids, even though often that same part of the population will be hurt the most by the measures they are supporting but don't fully understand.


It seems to me that one crucial factor is what you might call "the destruction of the gatekeepers". In politics, you had systematic lying that was intended to deceive people, and lies led directly to Brexit in the UK and Trump's election in the USA.

There's a reasonably good account in "Donald Trump breaks the conservative media" http://uk.businessinsider.com/conservative-media-trump-drudg...

In the health field, we've seen systematic lying by the tobacco industry and the sugar industry, and quite a lot of deception (some no doubt sincere) in the food industry. The "gluten free" craze is one example.

There's a (possibly apocryphal) quote attributed to GK Chesterton that says "When men cease to believe in God, they don't believe in nothing but in anything."

When people cease to believe their governments, their doctors, their honest fact-checked newspapers and so on, they are easily exploited by snake-oil salesmen.


From that point of view, I'd suggest that the current problem isn't really the destruction of gatekeepers but rather swapping one set of mostly positive external influences (such as traditional journalists with strong professional ethics and critical reporting standards, and genuine expert commentators) for another much less positive set (such as online services that provide communications for the masses, but not in a neutral way, and media spin doctors as commentators).

You mentioned Brexit and Trump, which are case studies in the way people can be influenced by political campaigners, but I find a lot of the criticism of both of those results to be one-sided. After all, it's not as if the official Remain campaign in the UK or the Clinton campaign in the US were telling the truth, the whole truth and nothing but the truth either. However, there's less to be gained by fact-checking the losing side once the result is in.

Another recent example that I find more interesting is last week's UK budget. Much has been made of the announced rise in National Insurance rates for self-employed people. It's a controversial issue, because some people do exploit the tax system to pay less than they should by changing their employment status, but also because a lot of people who have never been self-employed themselves really don't understand how it works and tend to leap to conclusions that are objectively wrong. Sadly, rather than starting a potentially useful debate about different ways of working, different levels of risk/reward, and how the tax system should treat them, what has started is a discussion about how the party in power lied (because they gave a manifesto commitment before the last election not to raise the rate for this particular tax), and who can be made to fall on their sword this time to serve the entirely political purposes of who else.

In all of these cases, I think we would have been better off if we'd had a culture that fostered open debate and welcomed but looked critically at advice from those who might have more knowledge or understanding of any given subject. There are a lot of ways we could achieve that, but none of them involve communication channels that seek to influence which messages get through to promote a particular side of the debate. That threat is, in my view, even more serious than politicians who are blatantly lying, because we know some politicians lie a lot and can be sceptical accordingly, but without access to other information as well that scepticism might not make much difference anyway.


>their honest fact-checked newspapers

Which newspapers are you talking about? I don't think such a thing exists.


The New York Times, The Wall Street Journal, The Washington Post, The LA Times, the Guardian etc... plus AP and Reuters.

They all have trained journalists, sub-editors, and fact checkers. They all correct errors when they make them.


> How do you cope with the fact that a large slice of the population seeks out bad things and refuses good things?

I take comfort in the knowledge that, even if facts don't ultimately win out, human stupidity cannot affect anything beyond this planet.

Even if we end up causing global mass extinctions, life will flourish again in an eon or two.

Even if we develop some sort of planet buster and literally blast Earth to bits, the dust will reform into something else and there's an infinite number of other worlds out there anyway.


That's the very long view. There are probably better civilizations on other planets ;-)


Thank you for the common-sense analogy. I have a 'hunch' that one could whip up a proof based on countably vs. uncountably infinite mappings, where the number of ways to deceive/offend outweighs the number of authorities that could police them, even assuming such ideals as perfect enforcement and willingness to submit to being policed (i.e. international uniformity of laws). But that's just a first-coffee-of-the-morning hunch.


It's hard to see how theorems about infinite sets can directly apply to human relationships.

Maybe you'll just make a heuristic argument showing that something grows as n² or 2ⁿ? For example, the number of possible relationships among n people grows as n(n−1)/2, i.e. as n². The world's population of about 7,490,000,000 individuals would support about 28,050,049,996,255,000,000 (roughly 2.8 × 10¹⁹) potential relationships between individuals, or twice that many opinions that individuals have about one another.


My math prowess isn't up to the task but I'd love to see such a proof if it exists. Not because I don't believe it would support the point (I have the same 'hunch') but because the math geek in me would eat it up.


I also subscribe to the opinion that total avoidance of bad things is impossible, and that preparation for handling life's faults is a necessary part of the programming each person should have.


preparation for handling life's faults is a necessary part of the programming each person should have

That's a reasonable position, but sometimes no amount of preparation can give the little guy adequate protection against a big guy seeking to exploit him. That's why we have legal systems and regulatory frameworks in the first place, and providing a deterrent against big guys taking unfair advantage to reduce the number of times it happens and some sort of remedy for the little guy in the remaining cases is generally a good thing even if it's not a perfect solution.


I mean we tell kids to be wary of strangers, but we can't assign a policeman to every stranger.

Continuing this analogy - it actually turns out in practice that child abusers are almost never strangers, but someone known to the child, such as a teacher.


https://www.google.com/url?sa=t&source=web&rct=j&url=http://...

It's opinion, but there are definitely people out there who support this approach, such as Neil Postman. It's called crap detection.


The DMCA has incentivized everything we see in our current corporate-dictated web environment by making none of these businesses liable for any of their actions. The spread of misinformation is a kind of emergent phenomenon.

The democratic nature of physical reality hasn't led to every publication turning into the National Enquirer.


In this context you might mean the CDA, which provides a different set of immunities.

Edit: also note that misinformation isn't necessarily defamatory and so even if online intermediaries were liable for content, there wouldn't necessarily be a legal remedy to suppress most "misinformation". For example, there's probably no aggrieved party who can sue to stop a hoax or urban legend about nonexistent persons.


If Facebook or Google were to be held liable for the content on their platforms, they would not have the same business model that incentivizes outrageous and inflammatory nonsense. They wouldn't have business models that fail to directly attribute personal ownership of authored works.

This isn't about people being liable for spreading misinformation; this is about a business model that thrives on misinformation.


I believe there are two critical components of that which aren't simply a natural result of the democratic nature of the internet, nor are they necessary.

First, the WWW is relatively young, so people haven't learned how to use it well.

Second, large platforms have been created that shift the democratic nature of the internet: they make pseudo-open spaces that are not truly open, like the various social networks, and give users content that is (a) tailored to them, which leads to less openness, less improvement and more mindless consumption, and (b) delivered in an obfuscated way, via closed algorithms that prioritize keeping you on the website over improving your life.

Those might be somewhat naturally occurring, but the degree to which they occur doesn't have to stay the same.


It's a very idealistic goal. But sure, with a free and open Internet (which I'd call anarchic rather than democratic) there's no way to police content. For one thing, there's no overall authority. And even if there were, policing content would be impossibly contentious and prone to injustice.

In my opinion, what's needed are privacy and freedom. And education for independent thought, rather than censorship.


> democratic nature of the internet

Democracy assumes 1 person has 1 vote. This is not the case here necessarily.


Democratic in the sense that it's more democratic than previous publishing platforms because the barrier to entry is lower.


If Tim Berners-Lee really wanted to save the web, he wouldn't support DRM in our web standards.[1] It's absolutely disgusting as well that they would argue that "fake news" is somehow a threat to the internet, provide no evidence whatsoever to explain why, and then link to a panel run by a media company considered untrustworthy by about half of American voters.[2]

[1] https://www.w3.org/blog/2017/02/on-eme-in-html5/

[2] http://www.rasmussenreports.com/public_content/politics/gene...


Agreed. He should have clearly opposed this EME garbage, instead of endorsing it.


Yes, and he wouldn't endorse WebAssembly either (which is just a different, and even worse avenue towards DRM).


Isn't WA rather easy to reverse engineer? The VM it runs in isn't overly complicated, and is open to the public.

Whereas EME is more, trust this native code please, and give it all the access it asks for?


WA makes it possible to replace entire browser runtimes with proprietary stuff. For example, you can compile WebKit to WA, with typography rendering and flow layout on a Web Canvas, and "Save as HTML" and other basic functionality disabled. I find it very irritating that TBL/W3C is promoting this and EME. I think the W3C should be seen for what it is: a self-proclaimed standardization company, acting in the interest of whoever pays the hefty membership bills, struggling to stay relevant.


You can already do that with plain JS, though: see the Qt rendering engine ported to JS+canvas: http://vps2.etotheipiplusone.com:30176/redmine/projects/emsc...


But still, you can inspect that WA code, and WA doesn't have access to the DOM. Inspecting the EME packet is a crime in many jurisdictions, and designed to be difficult.

WA makes sense as what it is intended to do: give a memory efficient alternative to JavaScript in circumstances that warrant it.


What are circumstances that warrant it in a web browser? The things it makes possible can be had with native apps. What it does make possible is a new model for software and content sales. Which is precisely why I find it has no place in a web spec.


I wish you were right, but the market has decided that the web is the next cross-platform framework.

If this has to happen, let's make it work in a way that doesn't let the stupidest ideas survive. Using WA still requires JS to interact with the user. Which is far better than the monstrosities that JS is being forced to perform at the moment.

Some simple, relatively sane things that WA can do, but JS is a bad choice for:

* Client-side encryption/decryption

* Socket management


Could you elaborate? I've never heard of web assembly being related to DRM before.


I've replied above and also on the WA thread a couple days ago.

[1]: https://news.ycombinator.com/item?id=13793020


As much as I'd love to help projects like IPFS (for example), truth is that most people simply don't care and are entirely clueless on the impact of the continued centralization and surveillance of the Internet on their lives. Sitting at a table with random people, they giggle and smirk, saying "I've got nothing to hide, you're too paranoid, bro, cheer up!", and I quickly give up. They have zero idea how much info is collected about them. If tomorrow somebody pulled out that info in a fabricated trial against them, they'd sing another tune, but by then it would be way too late. Nobody ever listens until it impacts them directly. Sad reality about Homo sapiens. Another one is the echo chamber effect -- people absolutely LOVE their social echo chambers, and they can legitimately punch you in the face if you point them at a source that disagrees with them.

As a second and last point to the above, I can't afford to donate all my free time to help progress the decentralized internet anymore. I am 37 and I have a very happy personal life, but I need to work on my health a lot; I am very tired and burned out, and I find myself unable (even if I want) to work for free without any reward in sight (not even talking about money; I am sure I wouldn't even be thanked). I imagine many others are in a similar position -- in terms of finances, in the health department, or in their general mental stance.

I very much like the idea of creating a "home internet box": a self-contained fanless machine connected to a UPS, containing a router, firewall, your own website, your own mailserver, your own private Dropbox, a universal P2P node (BitTorrent / IPFS), etc. But as others have pointed out, our current stack of network technologies is so bloated and full of incomplete standards -- which in turn are likely full of exploits and dark corners -- that right now the only seemingly appropriate course of action is to get rid of it all, except the physical layer protocols, and start over.

Try making an API app that works with anything other than HTTP and HTML/JSON. Tell me how that goes for you. Try using ASN.1 as a data format, or a compressed, secured IP layer protocol. Yes, it's possible, but it's much slower than it should be. It seems we humans always want to have one "universal truth".

It's extremely sad, and I am afraid we'll live to see very oppressive times pretty soon.


@pdimitar - forgive the shameless plug but you might want to contribute to FreedomBox:

https://en.wikipedia.org/wiki/FreedomBox


truth is that most people simply don't care and are entirely clueless on the impact of the continued centralization and surveillance of the Internet on their lives

That's a common assumption, but I wonder how true it really is. I've certainly talked with friends in their 20s and maybe early 30s -- people who have grown up with the Internet and ubiquitous mobile devices -- and had them express a sentiment that was more frustration than ambivalence. Sometimes they did find it creepy that they'd be tracked around with ads, or that their phone was doing things based on where they were or what they had planned to do later. However, they've never known technology to work any other way and assume there's nothing they can do about it, and they value the social aspects of sharing stuff online so they keep using these services.

I very much like the idea of creating a "home internet box" which is a self-contained fanless machine connected to an UPS -- and it contains router, firewall, own website, own mailserver, own private Dropbox, a universal P2P node (BitTorrent / IPFS) etc., but as others have pointed out, our current stack of network technologies is way too bloated and full of incomplete standards

It used to be common that your ISP would provide you with an email address, web hosting, and so on as part of your package. Everyone could set up a basic web site by just FTPing an HTML file up to their ISP's server, and then yourname.yourisp.com would show it to everyone, or you could get your own domain name and use that instead. Likewise for sending and receiving mail. Many countries set up their legal/regulatory frameworks to foster competition between ISPs, and so in practice we had a relatively decentralised Internet. You obviously still had the equivalent of today's lock-in problem if you relied on the email or web address your ISP gave you rather than your own domain, but you didn't have to.

It doesn't really take having some magic box in everyone's home to provide this sort of flexibility, though such a box would be no bad thing IMHO. We just have to stop doing so much through a tiny number of centralised service providers and social networks, and develop standards for interoperability and federation. The whole Internet was built on those principles, so I'm pretty sure we could do it for sharing data like mail and photos, and there are many interesting possibilities in terms of searching for data as well.

One of the other provisions in the new EU rules that come into effect in 2018 is effectively a right to export data from one controller so it can be processed by another, so people could potentially migrate all the data they've given to sites like Facebook or Instagram or Twitter or GitHub to some other competing service (assuming such a service exists). It will be interesting to see how that one plays out and whether it is effective in breaking the lock-in effects that have allowed so few companies to become so dominant in recent years.


> However, they've never known technology to work any other way and assume there's nothing they can do about it, and they value the social aspects of sharing stuff online so they keep using these services.

I'd argue that how modern people ended up indifferent to the growing centralization and surveillance is largely irrelevant. The sad result is still there. We all have anecdotal evidence, and mine isn't more important than yours -- that's a fact. My point is that the result is there and it's not changing for the better with time.

> It used to be common that your ISP would provide you with an email address, web hosting, and so on as part of your package.

But those required tech expertise in order to be utilized. It's my view that ISPs stopped offering these because they were expenses, and the services those expenses paid for were barely used by more than 1-2% of their customers. If we have a "magical box" at home, it should definitely be much further ahead in terms of user-friendliness; say, WordPress / Ghost / any-site-tech with a wizard-like Next->Next->Next cycle (with some checkboxes / theme previews along the way).

> We just have to stop doing so much through a tiny number of centralised service providers and social networks, and develop standards for interoperability and federation.

I want you to know that I am 100% on your side, first of all. But honestly, using the word "just" for these mega challenges is slightly naive.

First of all, most people hate the thought of "scouring the net" for their news or daily fix of meaningless updates. There's a very good reason why the social networks are a successful format, and it's not only corporate interests -- people like having only one source; it makes things simple for them and they love it. You and I disagree, but we don't speak for humanity at large, and humanity at large seems to love having a narrower view.

Secondly, advertising supports a large part of the internet. I don't believe for a second that a serious decentralization effort will not be SABOTAGED by ad providers (maybe even including Google). They'll most likely plant paid trolls and fake news writers and then start shouting: "LOOK! DECENTRALIZATION IS BAD! Come back to us at Google, we have AI-backed fact checking!" OK, let me put my tinfoil hat away. Even if that never happens (a stretch IMO), we still have thousands of ad companies who will do their damnedest to make their centralized website customers (namely Facebook et al) even more appealing than before, and try to make the decentralized services look behind the times, non-trendy, slow, user-unfriendly or whatever -- so the teenagers and young people keep flocking to them. In short, there's a lot of economic inertia behind centralization, and it won't be easy to kill, because there's a lot of financial interest there, and the people holding such amounts of capital have historically never given up their wealth sources peacefully.

Thirdly, standards for interoperability and federation have been attempted for decades now. I am not an expert in the field -- not in these standards, and not in the ego wars in the OSS communities -- but it's my opinion that the pissing contests in the OSS communities are a huge impediment. Have you taken a look at the KDE / GNOME wars years ago? It's as shameful a piece of human history as any genocide; I'd even dare say more shameful, because there are no lives on the line, not even any money, just some basement dweller's ego and nothing else.

If we're to resist centralization and surveillance, we, the people who are against it, absolutely must forgo ego and become very scientific. There's already a pretty good consensus about most of what a decentralized hosting service must do (reference: see IPFS; seriously, do it, it'll take you a long time but IMO you'll emerge better informed than before), but when it gets to the details, people either start flaming each other, or a dictator of an OSS project decides they don't care what any random person thinks and just moves forward without any scrutiny or consideration of feedback.

This must stop. The agents who benefit from centralization and surveillance are without a doubt dying of laughter at how we "the opponents" are far busier fighting amongst ourselves than coming together as one and offering an open and ad-free alternative to their services.

Finally, laws, EU or otherwise -- sorry to say it bluntly -- don't amount to shit. History has proven that if a big player has deep enough pockets, they'll get things their way, laws or not. Let's not go there. I think deep down we all know the laws target the citizens and not the companies, 99.9% of the time.


I suspect we agree on more than we disagree on here, at least as far as the principles go, even if I might lack your flair for the dramatic. :-)

That said, I really don't think developing standards for interoperability and federation is such a big deal in the grand scheme of things. After all, modern networking -- including the Internet -- is built on numerous such standards, carefully designed and documented, widely implemented and effective. If we can develop a stack of protocols for totally unrelated systems to talk to each other, from the lower levels of LANs up through things like TCP/IP and SSL to application level details like sending email between SMTP servers or requesting web pages using HTTP, surely we could standardise sharing content like messages and photos from friends without relying on some mysterious centralised service.

I don't know whether most people do prefer to have only one source for their information; I'd like to see more data before forming any strong view on that one. But let's assume you're right for the sake of argument. Is that a problem? We've had systems that collected and combined multiple streams of data for ease of reference for a long time, from the earliest days of e-mail lists with digests and Usenet newsgroups and RSS feeds up to modern Web-based aggregators like Reddit, the Facebook news feed, and indeed the site you're reading right now. Modern smartphones already combine even these feeds from multiple sources into a single stream of news and communications for ease of access. Is it really so far-fetched that we could cut out the middle-man in some of these cases and move back to a more peer-to-peer, decentralised system with neutral infrastructure?
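
As a toy sketch of how little machinery the client side of such aggregation actually needs (placeholder URLs, plain RSS assumed, no central service anywhere):

    // Toy aggregator: pull several independent RSS feeds, merge newest-first.
    async function fetchFeed(url: string) {
      const xml = new DOMParser().parseFromString(
        await (await fetch(url)).text(), 'text/xml');
      return Array.from(xml.querySelectorAll('item')).map(item => {
        const raw = item.querySelector('pubDate')?.textContent;
        return {
          title: item.querySelector('title')?.textContent ?? '',
          date: raw ? new Date(raw) : new Date(0),
        };
      });
    }
    async function timeline(feeds: string[]) {
      const items = (await Promise.all(feeds.map(fetchFeed))).flat();
      return items.sort((a, b) => b.date.getTime() - a.date.getTime());
    }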


> I don't know whether most people do prefer to have only one source for their information; I'd like to see more data before forming any strong view on that one.

I admit I have no scientific sources. This is just a gut feeling born from my numerous interactions with people throughout my life. I also happen to believe that not everything can be measured scientifically, and there are a lot of things "everyone knows but almost nobody will admit in an official survey" -- "almost everyone prefers buying only one newspaper" being one of them. But that would go wildly off-topic, so I'll stop right here.

> Is that a problem?

Of course it is a problem. If we assume there are malevolent people who would want to suppress certain kinds of news (and with Trump, Kim Jong-un et al. in power, I don't think anyone can doubt the existence of such people anymore), it's much easier for them to bribe or coerce a single entity into censoring things. For true free speech, we need a fully anonymous but attack-resilient, immutable, decentralized network (sorry to keep repeating myself, but... like IPFS). Good luck DDoS-ing or bribing/coercing that. It's made to be resistant against planting fake data, man-in-the-middle attacks, and DDoS -- right from the get-go.

Admittedly, IPFS isn't there yet; for example, it lacks automatic replication, and the fact that there are already organizations you must pay to "pin" (replicate) your content tells me IPFS might eventually lose its credibility as well :( Again though, off-topic. Sorry, you caught me in a very chatty mood.

>Is it really so far-fetched that we could cut out the middle-man in some of these cases and move back to a more peer-to-peer, decentralised system with neutral infrastructure?

In technical terms, it is not far-fetched at all. We're actually very close to it. Man, I'd love to work on that, only if that was my main source of income. I would pour a lot of energy and heart into such a work.

In economical and general reality terms however, it's almost impossible. As I mentioned, I am convinced there will be a lot of resistance from agents who would be negatively impacted in the pockets or their data collection. But you know what, if I am wrong, I'll be the happiest little panda.


> TCP/IP / SSL / SMTP / HTTP

Were all invented long ago and were all the first of their kind. Creating a new standard when there are no widespread alternatives is easy; doing the same where everyone is already invested is hard. E.g., that's why payment systems suck universally.


As someone who does a fair bit of work in networking fields, I think that's a pessimistic view. Most people don't see when the underlying infrastructure develops, in no small part because the standards and compatibility issues are so carefully considered, but that doesn't mean there aren't newer standards and protocols being developed all the time. We're now on something like the fifth mainstream WiFi standard, for example, and while someone buying a new laptop or getting a new box from their ISP might not know what all the 802.whatever markings mean, they still experience much faster speeds and higher reliability compared to the earlier technology. An example from much further up the stack is that we're starting to see wider support for Web serving using HTTP/2, which is a big change from its predecessor.

Even with payment systems, we've seen multiple contactless payment technologies become established very rapidly in recent years, and developments like Chip-and-PIN cards a few years before that. Of course online payment processing is also a much more developed and competitive industry today than it was even five years ago, which again is partly because both the technical and the regulatory frameworks have opened up in recent years. SEPA in Europe is a good example here.


I like optimistic people (no sarcasm). But I think you're overestimating the cooperative abilities of Homo Sapiens. Every bank, store and pet garage invents its own mobile payment system nowadays. This is very bad and leads to a fragmentation of effort that shouldn't exist in the first place.

Also, I'd argue HTTP/2 is not such a huge improvement as many make it out to be, but I can't deny it's some improvement compared to 1.0/1.1 -- that's a fact.


I don't know about HTTP, but the first three were definitely not the first of their kind, particularly when you take into account how they operate now compared to their original incarnations.

GSSAPI and Kerberos, for example, both predate SSL (Kerberos by nearly a decade if I have my dates right). SMTP was originally intended only to transit mail protocols across networks, clearly evidencing that there were internal (incompatible) mail protocols before then. UUCP and FTP were commonly used before SMTP to transfer messages; it took over a decade after its invention for SMTP to finally see off UUCP, and a few more years for X.400 (invented more or less concurrently) to fade away as a potential competitor.


I think we can easily replace "were the first of their kind" with "were invented in times when there was a desperate need for a good protocol to do X, Y and Z", and then you and the parent poster would be in full agreement, don't you think?

I think his general point was -- even if there were some ugly corners of the technology, people were like "you know what, this is the N-th try and we really REALLY need this tech, let's move forward with it and fix the problem later". I think we all know how that usually ends, don't we?

It ends with a lot of legacy baggage and huge economic pressure to not change anything. So we come to our present dilemmas (outlined in the original post).


The letter:

http://webfoundation.org/2017/03/web-turns-28-letter/

Referenced by the W3C, but surprisingly without a direct hyperlink, only by title. A bit strange considering the organization:

https://www.w3.org/blog/2017/03/28th-birthday-of-the-web/


Does anyone else see it as a problem that web browsers are getting so feature-rich? It means that anyone who wants to write a new web browser won't be able to.


It's called technical 'progress': as technology progresses, you need bigger and more specialised teams of technicians. It reaches a point where huge entities like governments and corporations are the only groups with enough resources to compete.

It happens everywhere... a constant drive to improve by building on top of layers and layers of abstraction - to the point where even the experts give up and view it as, well, magic.


I don't see any other example.

I see technologies being replaced by something totally new from time to time.

What would be an example of this?


The most obvious example in the real world is huge scientific projects like CERN's LHC, which require the co-operation of multiple governments - no individual or team could attempt that. It's no surprise that the web came from there, since creating it at the time required a certain technical expertise.

If you want more software examples, how about operating systems? The good thing about software is there's a low barrier to entry, so you can have open source projects. That said, those open source experts often know their trade due to working as technicians for business or government, or relying on support materials.

The further down the road technology progresses, the less accessible it is for individuals to create.


Bloat is hardly progress.


>Does anyone else see as problem that web browsers are getting so feature-rich?

No. It would be a shame if software were forced to only ever be simple enough for a single programmer to trivially reproduce. We would never have gotten past the terminal in that case.

> That means that if anyone wants to write a new web browser he won't be able to

Anyone can, they just need a lot of domain knowledge and time. They can also fork and edit existing open source browsers, or develop browsers which aren't quite so feature rich. But the complexity of the modern web is the result of generations of iteration on previous work, and of giving people a platform to express themselves in the way they want.

Most people want the feature-rich modern web, and that's impossible without browsers complex enough to deliver it.


Nothing that you said contradicts what I've said, so don't think you're the voice of reason here.

Of course it is good to have browsers supporting more and more features. I'm totally for "the web" side of the web-vs-native debate. I hope all "native apps" die in a fire and everything starts being written to run in browsers, as everything gets faster and better.

I just commented about a serious drawback of all this. I don't know what a solution would be, however, or whether there's a need for one.

Of course anyone can if they have domain knowledge and time. Anyone could have built a web browser 50,000 years ago if they had knowledge and time.


Does anyone else see it as a problem that web browsers are getting so feature-rich? It means that anyone who wants to write a new web browser won't be able to.

Those who control the web browsers control the Internet. And web browsers have gotten so huge and hyper-complex that only wealthy corporations can afford to enter the market as new players.


Apart from the political rant, that's exactly what I said, isn't it?

Or do you mean to say that Google, Apple and Mozilla have been working to increase the features of their browsers in recent years with the objective of controlling the Internet?


Feature-wise, I always think there isn't much between the big-brand browsers without extensions. They could do so much more. They may be better at CSS and JS and have some good debug tools in them these days, but as actual web browsing tools, there's plenty of ground to cover. Bookmarking, history, link management and even meta inspections/overviews could be much more user-friendly and useful. Example: Chrome is a tabbed browser, and as such titles are lost until you focus on or select a tab, and that bit of meta information is really important. Publish dates shouldn't necessarily be embedded in the HTML body when a browser could extract that information from a header or the head. Navigation could be more fluent, and the browser could aid in that. So usability-wise, I'd say they are pretty feature-lacking.
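
As a rough sketch of what I mean (the function itself is hypothetical, but the Open Graph meta tag and the Last-Modified header are things pages and servers already expose):

    // Sketch: surface a publish date from metadata the page already carries,
    // instead of making users hunt for it in the body text.
    function publishDate(doc: Document, response: Response): Date | null {
      const meta = doc
        .querySelector('meta[property="article:published_time"]')
        ?.getAttribute('content');
      if (meta) return new Date(meta);
      const lastModified = response.headers.get('Last-Modified');
      return lastModified ? new Date(lastModified) : null;
    }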


From your example what I could understand is that browsers are not doing what you prefer, so you call them "feature lacking".

For the specific case of your example I prefer the current browser behavior. So what?


Perhaps it wasn't the best example, as the tab/window title could fall under window management. But fine, if you just want minimal chrome, perhaps Chrome will do. Parent said feature-rich. My point really is that there are technical features and usability features. And while of course there is crossover, I don't see much in the way of advancement in usability (without extensions). Browsers haven't changed that much in the last ten years, from an end user's perspective. Window and web page management isn't great.


Yes, you're not the only one to see this problem, but it's not as commonly understood as it should be.

I've long desired a formal semantics for CSS, for example (but I believe flexbox is a step in the right direction in this regard).

If we're heading into a p2p future, my opinion is we're going to have to build it on web (HTML/CSS) technology, to be able to leverage browser efforts and to support cross web/postweb publishing.

And I believe the prospect for doing this, from a pure technical PoV, is quite good. While I'm not completely agreeing with everything they do, WHATWG have succeeded in spec'ing a reasonably declarative and rich document language that doesn't need JavaScript for each and everything.

OTOH, more and more CSS ad-hoc syntax, and entirely new procedural web runtimes (cough WebAssembly cough) should be resisted.


I don't see this getting better. In fact, I think the next big trick in web browsers will be to embed a full-blown GUI widget toolkit, so you can run what would effectively be a Visual Basic 6.0-type program, directly in the browser, allowing you to bypass all the HTML5/CSS/JS layers we slather on top of whatever framework we're using to make it approach the same UX as a desktop app. It's going to take even more bandwidth to deliver this kind of app, but no one seems to care about that anyway.


I think you're dead wrong about this. Desktop developers have been thinking for decades all the web needs is a more-desktop-like environment. The general conclusion of web developers is that they like html, CSS and JavaScript and want them to improve, not disappear. Many people see them as the evolution of GUI development, not anything to be removed.


Only web developers that have never actually written desktop apps think that, in my experience. The prevalence of hacks, frameworks and layers that try to make the web stack marginally less terrible suggests that no, they don't like HTML, CSS and JavaScript; they deal with them because they have no other choice and it's all they've ever known.

Just take SASS as an example. Why does it exist?


Because CSS is good enough to allow for some other language to be transpiled to it?


There'll be desktop-style GUI toolkits that render to webGL not long after webassembly starts seeing serious adoption, I'd guess.


Can pretty much do all that in a Java applet, and have been able to for a long time.


Not any more...

Meet the new boss: same as the old boss, but now you have to learn to deal with them all over again. :-(


Java applets are not supported anymore anywhere, I think.


Businesses using IE and Safari are still running them fine, and they're still supported in the latest Firefox ESR (though it will reportedly be the last). However, with the recent changes in Firefox, no "evergreen" browser now supports them.


I don't think that's a browser issue. More an authoring issue. You could serve up pages as text files, with as little html as a link.


There is a special circle of hell for people who write documentation on pages so javascript heavy that a lynx browser on a computer with broken graphics drivers can't load it.


At least writers of documentation for graphics drivers.


ugh, we're still using ExtJS 4.1 from 2011. I don't know what circle I'm in but it's definitely not pleasant. Backwards compatibility for IE6!!


It's interesting that one of the great things about the web is the promise of its distributed nature, and yet his prescribed 'fix' for misinformation sounds like the establishment of some sort of central authority to regulate content.


I'd have thought the best fix for misinformation is having a good audit trail of links -- references -- and these must end at a source or sources.


I thought he was saying exactly the opposite:

"We must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while avoiding the creation of any central bodies to decide what is “true” or not."


Sounds like doublespeak to me. Google and Facebook both took partisan positions in the past election. Why should we trust them as 'gatekeepers'?

And who gets to decide who the gatekeepers are?

An oligopoly made up of elite companies is no better than a monopoly, especially when the members are demonstrably partisan.

This whole idea that we need some sort of authority to act as the Ministry of Truth is misguided.


This whole idea that we need some sort of authority to act as the Ministry of Truth is misguided.

You seem to keep coming back to this point, but the quote I gave before literally advocates the opposite of creating such an authority.


The real test of whether one supports free speech is whether they still support it when voices they don't like get heard.

Kind of like the way to judge a person is how they treat those who are less fortunate, not how they treat their peers.


The interesting thing here is that there is no inherent right answer when it comes to balancing freedom of expression with a right to privacy. Both are desirable goals, yet it is inevitable that they will sometimes be in conflict, particularly with the capabilities that modern technologies offer for promoting or impairing either side.

Moreover, in the world today, even cultures with broadly similar values such as the US and much of Europe often take very different views on this particular issue, which makes finding a reasonable consensus for a system that spans these different places... challenging.


The new web needs to be distributed in a privacy-preserving sense. Today, you can't realistically browse the web without getting identified and – generally speaking – geolocated.

What we need is a model where you pull information you request from distributed and diverse pools of public domain content.


Yes we need more federated services. Email is a great example.

Facebook is a telecommunication medium, which means it should conform to the telecommunications act, which says that there should be a level playing field, and an open network.

I guess TBL would do best to start thinking about the protocols for such open systems.


What about matrix.org?


There's Freenet. It's just that nobody uses it.


Took a quick read -- is there no way to publicly publish something without using other means to communicate the key and content?


You can communicate the link over one of the multiple networks that work over Freenet, like FMS (Freenet Message System).


> distributed and diverse pools of public domain content

Like what Keybase is doing?


The current narrative of misinformation as a news item is a new thing, which arrived in our world during the recent US election process. The issue gets my suspicion-radar bleeping. The whole narrative smells funny to me.

It's not really a global issue; it's a current affairs issue, and one particular to a specific geography. And it's not really an internet issue, I think, but a human one.

What I find interesting is that Trump is adopting the narrative that emerged to criticise him and turning it on media bias in general. That's interesting because political bias and misinformation can be separated -- actual wrong reporting of facts vs. bias of interpretation -- but they can be argued to produce the same effect.


My thoughts exactly. The issue of "fake news" being this global menace appeared seemingly out of nowhere, despite the fact that "fake news" has been part of internet culture for pretty much its entire existence.


It seems a little disappointing, because I have this fantasy that the goals and problems of the web are above the little hype tempests that happen to have dominated the news cycle for a few months. I'd like the face of the web to have a steadier vision, but I'm not sure how realistic that is.


The web needs more anarchy, not less. Less spoon-feeding people with the truth, let them bear the brunt of their failures. All the problems he mentions are political, stemming from too much power in governments which makes political candidates ruthless.


Agreed. This idea that we have an issue with misinformation is ridiculous. The problem isn't the signal to noise ratio, it's learning how to find the signal in the noise.


I've been getting increasingly concerned about the future of the web as well: https://arstechnica.co.uk/information-technology/2017/02/fut...


> "It’s too easy for misinformation to spread on the web"

It's too easy for misinformation to spread everywhere.


The marketplace won't sort out the security issues, any more than it sorted out unstable banks. Consumers lack the ability to obtain information, understand the issues, and make good decisions.

Computer systems should be regulated for safety, which includes confidentiality and integrity, like everything else.


Just as a counterpoint to your argument about unstable banks: It appears that the marketplace was sorting out the unstable banks, which is why they needed to be "bailed out" by the government.

If the government had not intervened, those banks would have been bankrupted, and rightfully so, because they essentially made a giant bet on the housing market and lost.

The government intervention in the case of the banks prevented a valuable feedback mechanism from taking place, whereby the "bad players" (as in, bad at the game --- at gambling) would have learned from their mistakes. So instead of the negative feedback of bankruptcy, they got the positive feedback of bailouts, and we should expect to see another financial crisis in the not-too-distant future.

What remains to be seen is whether the government itself has a working feedback mechanism for this situation. Will they bail out the banks again? in other words.


I think you have expressed here a worldview that is completely different than mine, especially the bit about "consumers."

What does "regulated" mean to you?


I'm not sure it's a matter of view; it's a matter of fact. Consumers do not have the ability (much less the time!) to understand everything. They can't reliably figure out which bank is stable, which drug will kill them and which will heal their particular illness; they can't figure out which electrical appliance is safe to use and which will electrocute their family, or the fire risks of the various buildings they use; and I don't believe they can understand IT security, much less evaluate the security of products.

No matter what our worldviews, consumers won't obtain more capability and time. I'm a technical professional and educated person, and I certainly don't have the time or resources to answer those questions, even the very last one.

> What does "regulated" mean to you?

I don't understand the question. In the case of IT security, I can think of many ways to do it: liability for bad security, rules requiring good security, etc. I don't know enough about regulation to know what works in what situations, but some minimal rules and liability sound good.


As commerce has become more mechanised, we've lost the ability to bargain and haggle in consumer business relationships. Forget about sacrosanct privacy rights, I can't even choose to pay to opt out of a lot of data collection. We need better options than all-or-nothing.


You are right, but this has nothing to do with bargaining. I run a business that's totally outside the web. I don't bargain. If any customer tries to bargain, I just say no.


Except for the private data part, I found this not very constructive. Too political. IMHO the biggest problem with the web itself is snooping and tracking, all driving an insane amount of bloat which clogs the internet. Any extra bandwidth or horsepower is immediately sucked up, and then some, by advertisers and tracking. There's nothing lean and mean about it anymore.

The internet exists as an information resource that people need to be able to sift through themselves, not something that governments or other self selected groups decide to arbitrarily censor for whatever selfish reasons they have.


> "Political advertising online needs transparency and understanding"

As if that wasn't a problem outside the web. Defenders of democracies like to dream about "transparency and understanding".


It's not a new thing either, web or not.

Chomsky has been warning about the dangers of mass media turned propaganda machine since the 1980s. Hell, George Orwell bitterly quipped that history ended in 1936 and everything since then has been propaganda. [0]

0: https://blogs.commons.georgetown.edu/engl-246-fall2011/2011/...


"Defenders of democracies" live in democracies. Defenders of dictatorships and monarchies, from Jeremy Corbyn (who had kind words for Castro) to neoreactionaries, also live in democracies, presumably because they're not actually stupid enough to believe what they say.

Democracy is a lesser evil, which many people born into it fail to appreciate because their imagination does not render the greater evils realistically.


>Defenders of dictatorships and monarchies, from Jeremy Corbyn (who had kind words for Castro)

What gives you the impression that Corbyn is against democracy? In fact, he participates in the 'democratic' system of his own party and of the UK. I personally refuse to participate at all in the bourgeois democracy, but that doesn't mean I'm against democracy as a principle.


I only said he had praise for Castro, a dictator whose soldiers shot people trying to escape his rule in the back and sank their boats. I didn't say he was against democracy, only that he said very nice things about a dictatorship that he sure isn't dumb enough to live under.


Is there any way to prove you wrong? Or is what you just said the absolute, immutable truth forever?

Because I have the impression that there's nothing I can say that will make you consider my arguments -- as I live in a democracy -- or even think about them -- as you also live in a democracy, and you seem to have accepted as a principle that "democracy is a lesser evil".


Of course you can prove me wrong or more precisely convince me I'm wrong (proofs are hard with these things.) Just show me enough monarchies or dictatorships fending off immigrants from established democracies instead of shooting would-be emigrants in their backs.

It also so happens that I'm dumb enough to have wasted time listening to arguments from neoreactionaries living in democracies and thinking about them. But mainly I think the test should be empirical.


Excellent comment, especially the last part.


For a start, we need to get rid of this ad-driven model. But this is not going to happen, because people are addicted to free. It's like a drug.


Wikipedia, university pages, and a lot of blogs and imageboards all work without advertising. It is not so much that people are "addicted to free"; it is that showing ads is so very easy and profitable. Site operators are to blame here.


Nope, it's just that quality content that's worthy of my $$ is mostly produced by bloggers who don't really ask for money. How many sites are there that are as good as LWN?

The ad network business is a big balloon soon to explode, as the number of actual customers you get is so small. I'd guess most advertisers know this but still publish ads because they're cheap enough not to skip. When was the last time you willingly clicked an ad? I believe in the past ten years I have willingly clicked fewer than five ads.


Advertising isn't just about clicks/conversions. It's also about brand/product exposure.


If the exposure is an ad in a space provided by something like The Deck network, certainly. But otherwise it's not a positive exposure.


Do you really want a world without professional journalism? I know it's en vogue to hate the "mainstream" press, but a world without institutions that can be somewhat trusted will be open season for demagogues.


I don't really hate professional journalism, I hate news. I only read the news because I have to. That aside, the actual, relevant problem is not what is en vogue to hate or not, but that the trustworthy institutions take the easy route: instead of producing nice subscription models and actual, valuable content with good, concise, precise wording and no clickbait, they just push sub-par stuff and ads to feed off of it. If there were an NYT or L'Espresso for my country, I'd subscribe.


I think a world in which all the trustworthy institutions hide all their content behind paywalls and so most people rely entirely on clickbait stuff from sources that don't have reputations to live up to is even worse...


So many "all"s used in a single sentence. They could have a free headline+summary feed, or a paid daily digest, or paid columns+digest with the rest gratis. Monetising content is more honest, more predictable and better for users' privacy and bandwidth than ads, and I'm telling you I'd buy if they'd supply. At the end of the day, collecting, verifying, editing and publishing news is not free for the news agencies; it's not produced for free and it need not be free at all.


So many "alls" (two!) because plenty of paid-for content already exists alongside the free media, and thus your principal complaint is that many news organizations have the temerity to cater to and profit from people that don't want to pay for news as well.

And if the objection is to people being misinformed by clickbait headlines and sensationalised takes on events in articles, I'm struggling to imagine anything that could be more inclined to worsen that situation than increasing the proportion of respectable news organisations that tell people they can pay a monthly fee to access the articles behind the headlines or bugger off and read Breitbart's take on it instead...


You can have free without ads if the consumers shoulder the distribution themselves. Decentralized p2p content distribution is the key. The more people want to consume something, the more will contribute resources.


It's refreshing to read Berners-Lee's proposed solutions as they point toward more technical and market-based approaches rather than the typical "we need more legislation to fix these issues" incantation.

I often hear many of the same people fighting "against government overreach in surveillance laws" (as Berners-Lee mentions) while at the same time advocating more legislation to govern information use/misuse on the web. I don't think it's realistic to expect government overreach to magically work where we want it and stop right where we don't.

Many of these problems aren't on the forefront of most people's minds (yet), but as the issues become more publicized and people begin to understand their importance, then we (as in "the people", not the government) will have a greater voice - and more importantly, power through informed choices - to make a difference.


I just wanted to add some positive comments here, to contrast the complaints about the state of the web as we know it today with the people who are working on solutions to the problems. There are well-defined paradigms for building distributed systems. While much of the web was built in the belief that these distributed systems would take root, lots of engineering went into client-server configurations. There are a ton of psychological reasons why these decisions were made. They don't have to keep being made, though. We can all embrace distributed applications (some call them serverless applications) and free the web once again. Here are a lot of great projects that are trying to do just that: http://github.com/toadkicker/awesome-ethereum


It sounds like he's just trying to put the genie back in the bottle now. First he creates a system that gives everyone total freedom, and now he's like, whoa, that's way too much freedom.


Tim Berners-Lee. Thank God, I was worried they had interviewed Al Gore


He is advocating for censorship.

Misinformation spreads everywhere, not just on the web. Who decides what is "misinformation"?

All speech and information is political, because man is a political creature. Who decides what is "political"?

His first point about losing control of our personal data is right on though.

Even so-called "heroes of the web/freedom" are on the "fake news bandwagon".

What the hell have we come to when this is considered enlightening discourse.

We're all in deep shit and this is a taste of things to come this century.


He doesn't actually say censorship. He says "by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem." Just putting fake stories at, say, 50th place in the search results rather than #1 is not censorship.


Who decides what is fake?


Isn't Facebook the biggest culprit on the web? Their walled garden approach and lack of social network portability is what I feel is killing the web more than anything.


Facebook got a foothold because authoring tools sucked so badly. That's ignoring the magic of some of their social tooling. But sharing digital content with your own intended audience could be achieved in a distributed manner. I know the W3C is trying to tackle that problem. We still need intuitive and easy publishing and aggregation tools that beat the Facebooks.


> But sharing digital content with your own intended audience could be achieved in a distributed manner.

We almost have the tools now:

* cheap or free blog hosting with easy markup and non-public posts

* self-hosted commenting with a URL field for commenters, enabling discoverability

* friendly RSS readers (I use NetNewsWire)

* password management built into most systems and/or browsers, to keep track of individual logins

There's work to be done to make it more user-friendly, but all the tools are there.


It's the user friendly bit that matters.


Publishing this in The Guardian, what beautiful irony


Berners-Lee seems to want to stay relevant. I mean, he invented the web, but does that give credibility to his announcements, concerns and predictions now? We all know how the semantic web turned out.


Irony: can't view the page with an adblocker.


No issue with uMatrix/uBlock.


works with lynx


I thought Al Gore invented the internet


> Imagine that Big Brother scenario extended to the millions of smart devices such as digital thermostats and fire alarms feeding the Internet of Things ecosystem, and you have a problem that could eviscerate the privacy of billions of people, say security experts.

Is this anything but opportunistic scare-mongering?

"Spy agency own spy tools. Wouldn't it be scary if they used them on you?!?!?"


"Wouldn't it be scary if..." is different than being provably known to do/use/abuse.


I mean, is there any evidence the recent batch of WikiLeaks-leaked CIA tools is being misused? As in, any at all?

As far as I'm aware, the only documented case of inappropriate tool use is overly broad selection criteria on legitimate pipelines of information (and related abuse of access to that data) -- the Snowden leaks. That was a doozy with serious constitutional implications.

But that's substantially different from "they're hackin muh TV!" just because the CIA developed the ability to do so as part of their mission to spy; I don't believe we've seen evidence of indiscriminate use.



