JavaScript is now required to sign in to Google (googleblog.com)
593 points by amaccuish on Oct 31, 2018 | 499 comments



When I was at Google I started both the login risk analysis project and the Javascript-based bot detection framework they're now enforcing, so it's a pity to see so many angry comments. Maybe a bit of background will make it seem more reasonable.

Firstly, this isn't some weird ploy to boost ad revenue. This is the login page - users are typing in a long term stable identifier already! The Javascripts they are requiring here are designed to detect tools, not people. All mass account hijacking attacks rely on bots that either emulate or automate web browsers, and Google has a technology that has proven quite effective at detecting these tools. There's a little bit of info on how it works scattered around the internet, but none of the explanations are even remotely complete, and they're all years old now too. Suffice it to say: no JS = no bot signals.

Google had the ability to enforce a JS-required rule on login at least 6 years ago and never used it until now. Without a doubt, it's being enforced for the first time due to some large account hijacking attack that has proven impossible to stop any other way. After so many years of bending over backwards to keep support for JS blocking users alive, it's presumably now become the weakest link in the digital Maginot line surrounding their network.

For the people asking for the risk analysis to be disable-able: you used to be able to do that by enabling two-factor authentication. You can't roll back your account to just username and password security though: it's worth remembering that account hijacks are about more than just the immediate victim. When an account is taken over it's abused and that abuse creates more victims. Perhaps the account is used to send phishing mails to your contacts, or financial scams, or perhaps it's used to just do normal spamming which can - at enough volume - cause Gmail IPs to get blocked by third party spam filters, and users' emails to get bounced even if their own account is entirely secure. The risk analysis is mostly about securing people's accounts, but it's also about securing the products more generally too.


Sorry, but at this point it is pretty obvious that big tech companies care about account security only as far as it impacts their services. The recent revelation about Facebook abusing 2FA phone numbers for marketing is a great demonstration of how that works.

Google too does some really funny things to make it nearly impossible to create and maintain anonymous accounts not tied to a phone number. Even when those accounts are just used for browsing and bookmarking, without sending any outgoing information.

>When an account is taken over it's abused and that abuse creates more victims.

Pushing JavaScript everywhere increases the attack surface for every single user on the web. Except it doesn't happen overnight, and big companies who do it (by affecting standards, or by doing stuff like this ^) aren't affected by client-side exploits and privacy loss.

If someone hijacks my browser through some clever JS API exploit and steals my credentials, what is Google's response? "Just use our 2FA." What about smaller websites that don't have resources to maintain 2FA? "They should authenticate through us." All roads seem to conveniently lead to centralization.

BTW, it is worth noting that the impact of a compromised account isn't nearly as significant if a single account doesn't hold keys to pretty much everything you do online. Somehow this is rarely factored in during such discussions.


> Google too does some really funny things to make it nearly impossible to create and maintain anonymous accounts not tied to a phone number.

Well yes, these kinds of accounts are highly susceptible to being bot accounts. What obligation does Google have to be the place for people's free, anonymous accounts? In any case, I haven't had a problem with the number of secondary accounts I've created that are tied to me only by another email address (which can point to another provider, like Yahoo).

> BTW, it is worth noting that the impact of a compromised account isn't nearly as significant if a single account doesn't hold keys to pretty much everything you do online. Somehow this is rarely factored in during such discussions.

How should this be factored into the current discussion? The use of JS is to ostensibly make it more difficult for automated hijacking to prey on users.


This is true. Google has explicitly never put the user first. We should be grateful to give them our information in the first place in the way they deem is best.


Sorry if I don't agree that giving Google a throwaway email address is a significant concession of personal information. Which free services do you recommend that do things better?


You can't give it a throwaway email address - they want your phone number now and will accept nothing less in my experience.

If that doesn't appear true for you, try over Tor and you'll see what happens to many people...


I am surprised to see a long thread about nothing. Google does force you to give them your phone number and they do not let you register new emails after you hit some limit.


I just tried creating a new account from a VPN, using a browser that I haven't used to log into Google previously. It allowed me to create a new account without a backup phone or email, and allowed me to send an email.

How does Google know you've hit a limit when you've registered new emails? And, not to belabor the point, but why should they be expected to give you an unlimited number of free accounts -- or, what service do you recommend that will be that generous?


I mean they could accept my cash instead of offering a service paid for with your information.

Oh that’s right, they’ll never give the consumer power over their data because that’s google’s entire value proposition.


So, don't use google, and run your own DNS that returns 0.0.0.0 for all google tlds. I mean, I'm all about having some responsibility at the corporate level, and people working for companies showing ethics.

But this is just cringeworthy. How do you propose getting your cash to said company? I mean, most methods will leak personal information the company could then use anyway.


The moment you try to login from another location, they will lock you out and ask you to enter a phone number, which from that point forward will be tied to your account.

I am not 100% sure whether they use geolocation, or just trigger this when your new IP doesn't match the last IP you logged in with.


> Pushing JavaScript everywhere increases the attack surface for every single user on the web.

I understand where you're coming from, but most users browse the web with Javascript = on. Even as a NoScript user I have Google whitelisted because most of their services are unusable without Javascript. Even automated tools have good Javascript engines now thanks to headless mode in popular browsers.

I suspect the next steps in browser security will not be to blanket-deny scripting, but instead focus on containers and sandboxing to make script-based attacks less worthwhile.


I am not talking about merely enabling JavaScript. I am talking about normalizing more and more APIs accessible to every website I visit. Sound. Canvas. 3D. Local storage.


> "Just use our 2FA." What about smaller websites that don't have resources to maintain 2FA?

I'm not going to say that providing 2FA is "free" in the time sense (both in implementing it initially and supporting people who lock themselves out) but on the surface 2FA requires just a library to verify 2FA codes and a column in your users table to store the shared secret.


Yeah it's a bullshit argument. 2FA is a very cheap solution to a problem that could end up very expensive. If you can afford to (securely!) store account information and have a login infrastructure, 2FA is a minimal amount of effort to implement. You could add 2FA from scratch in less than 50 lines of code and one extra column in your account DB. There's no excuse.
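For concreteness, here is roughly what "a library plus one column" amounts to for TOTP - a minimal standard-library sketch, assuming the shared secret is stored per user as a base32 string (the function names are just illustrative, not any framework's API):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, timestep=30, digits=6, skew=0):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // timestep + skew
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def verify_totp(secret_b32, submitted):
        # accept the previous/current/next 30s window to tolerate clock drift
        return any(hmac.compare_digest(totp(secret_b32, skew=s), submitted) for s in (-1, 0, 1))

The real cost is the surrounding work: enrolment UI, rate limiting, and recovery for users who lose their device.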


>2FA is a very cheap solution

If you don't know the technologies the website is built upon or how much it will be impacted by the increased barrier to entry for users, this statement is baseless.


Who uses malbolge to create servers?


I take offense at your tone, and the implied judgement. Malbolge was the right tool for us, and let us tap into a talent pool that was otherwise going unused.


The one thing I dislike about 2FA as a user is, if I drop my phone in a lake, can I safely recover my account? I have a lot of time, money, effort, etc. invested in my accounts, and I really don't want to lose that.


If you use something like Authy or 1Password to store your 2FA tokens then they aren't lost if you lose the phone. Does this mean a single point of failure and in some ways undermine the use-case for 2FA? Sure, but as with all things security you need to decide where on the spectrum you want to be. It's a game of trading security for usability based on what your situation requires.


You can write down on paper the private key you get when enabling 2FA. Some providers also give you a list of recovery keys.


If a website is smaller, it may not be sticky enough for a user to feel they get enough value to put in the effort to do 2FA.


That's totally fair, but you don't have to force 2FA on your users.


For good and bad, I happen to like social media for logins... Twitter, fb, google, ms all offer them. In the end if you get their "real name" and email address, I find that generally sufficient. You don't have to do the actual authentication, users can configure 2fa on their own.

I mean, some sites do ask for way more than they need here and that's often bad. In the end, I think it's a reasonable trade-off for end-user convenience. Which I often am.


Not to detract from your work there, but there are actually some great research papers about how Botguard itself is easy to bypass, and Google cookies do most of the heavy lifting when it comes to bot detection.

I've snooped around a bit myself and it doesn't seem like botguard does anything much more advanced than other fingerprinting solutions.

I just don't buy that this is all about detecting more bots; every sophisticated bot I've seen in the ad world runs javascript as it's better to pose as a normal user, and only a tiny fraction of users would have javascript disabled.

To be honest, I also don't think requiring javascript is a bad thing either.


Ah, you're assuming it's the same strength in all places it's used - and also that it actually has been bypassed.

There didn't use to be any public bots that could beat the strongest version, and from a quick Googling around I don't see that that's changed. Someone took apart a single program manually, years ago, but the programs are randomly generated and constantly evolve. So that's not sufficient to be able to bypass it automatically/repeatedly.

> every sophisticated bot I've seen in the ad world runs javascript as it's better to pose as a normal user, and only a tiny fraction of users would have javascript disabled

It's a lot faster and more scalable to not automate a full web browser. Bot developers would rather not do it, they only do because they're forced to. Forced to ... by requiring Javascript, like this.


I understand this is not a subject where details can be shared, but - I'm sorry - at this level, this sounds like marketing speak. "You can't possibly comprehend just how advanced our AI is. If it appears stupid to you, it's because we intentionally want it to appear stupid..."


Yes, I know. Nothing much that can be done about that, sorry.

The point I'm trying to get across is that companies use these techniques because they are effective - it isn't as simple as "some junk that was beaten ages ago" - and the collateral damage is very small, relative to other techniques. Far fewer users run with JS disabled than the number of users who struggle with CAPTCHAs.

We can see the direction things are going with reCAPTCHA v3, which appears to be the logical end of the path Google started walking 8 years ago - reCAPTCHA v3 is nothing but risk analysis of anti-automation signals.


> It's a lot faster and more scalable to not automate a full web browser. Bot developers would rather not do it, they only do because they're forced to. Forced to ... by requiring Javascript, like this.

In other words, the bot developers are still getting through, and meanwhile it's the actual humans who don't want JS which get screwed. Reminds me of DRM... honest customers are the most inconvenienced, while crackers still break it.


> In other words, the bot developers are still getting through

There's no perfect solution but any solution is still better than none. Why do you keep a lock on your door if I can break it in 30 seconds? Your computer is even there! I can easily add a key logger there, why do you have a password then if all I need is to do that? You aren't stopping me thus any protection you add is meaningless.

Let's say that having a full web browser takes 50% more resources (if you block javascript, you probably already use the argument that it uses 99% of your phone battery, so you can agree that 50% is pretty conservative); then you've just blocked 50% of the attempts JUST by requiring it. That seems pretty effective already and you haven't done much yet.

Now add all the information that you can gather using Javascript. Aren't you also blocking Javascript because of its capacity to fingerprint you? Again, another easy gain you can get.

> honest customers are the most inconvenienced

A tiny fraction of the honest customers are inconvenienced; a huge portion of them allow Javascript. They're probably inconvenienced more by blocking older versions of TLS.


You are putting words in his mouth. JS provides a vast reactive surface to identify automated tools with.


Javascript allows them to identify bots who automate a full web browser.


Javascript is also required for the vast majority of web-based exploits. I find it somewhat strange that you ask me to make my system less secure so you can better secure my account.


Like the blog post mentioned, 99.9% of users already have JS enabled, and this number is only going to go up as websites rely more and more on JS. For them, this is a purely beneficial change, with no downsides. It's somewhat selfish for you to ask that your system be made more secure, even at the cost of security for 99.9% of other users.


I'm hijacking this thread to say we need a better ID system for the web! One that preferably works without JS. Something built into browsers that also allows you to create as many identities as you want. When an id-signup header is detected, the user sees a signup button and can choose what information is sent to the web site/app. The user can log in to any site with the push of a button, or even automatically. With a built-in public-private key ID solution, your friends will have the same ID public key on both site A and site B. The contact list can even be inside the browser, and web sites can ask for it, allowing for example white-listing in messenger apps, or letting the user pick who is allowed to see their family pictures, etc. And web sites/apps no longer have to store usernames/passwords/keys; they only have to make a "challenge" where the browser automatically proves the ID. The private key should be exportable and standardized, and it should be possible to also use smart cards and second-factor logins. Having ID built into the browser means every site/app no longer has to build and manage all this functionality independently.


In a certain sense this existed with the `<keygen>` element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ke...) which is unfortunately deprecated. But you still have all the same issues of moving and safeguarding key material.

These days with browser/mobile sync, maybe it's actually possible. But like a synced password manager, it makes a primary account breach that much more devastating.


The problem with certificates is they require a central authority, eg. politics. We need something that can work anywhere, without any third party or central authority.


There's nothing about <keygen> specifically that requires a central signing authority. You could just as well generate a public/private pair, feed the website your public key, and sign challenges with the privkey to log in.
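A minimal sketch of that idea, using the Python `cryptography` package (the register/login split here is illustrative, not any standard's terminology):

    import secrets
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # client: generate a keypair; the site only ever sees the public half
    private_key = Ed25519PrivateKey.generate()
    stored_public_key = private_key.public_key()   # what the server stores at signup

    # login: server sends a fresh challenge, client signs it, server verifies
    challenge = secrets.token_bytes(32)
    signature = private_key.sign(challenge)
    try:
        stored_public_key.verify(signature, challenge)
        print("logged in")
    except InvalidSignature:
        print("rejected")

The server stores no secret at all, just the public key, which is the appeal of this over passwords.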


You can also create a public-private key pair using JavaScript. The only problem is that when you go to another site, it won't let you identify yourself using the public key you generated on the other site. And the browser won't answer the challenge automatically. Let's say we make a browser plugin, as a proof of concept. Then when it has "product market fit", browsers can make it built-in functionality. It will however be hard to ask a user to install a plugin just to sign up / identify to your site/app.


FWIW, this is something U2F somewhat solves. There is still the following exhaustive list of problems:

1. Education for and purchase of U2F keys

2. Key loss recovery mechanism

3. Key stolen defense (you can't just rely on the U2F key alone, there must be a pw or other type of second factor)

4. Widespread browser & device support (without it, a user/pass is required as a backstop)

Nevertheless, it is progress.


A problem with U2F and prior solutions is that they are hard to implement, and public/private key pairs are specific to the website origin, meaning I can not use the public key as an identifier. We need something much simpler; basically all it needs to do is prove possession of the private key. It would however have the same problems, those you mentioned, which are difficult problems. My idea to make those problems less painful is to require key rotation at regular intervals, so that people automate away those pain points. For example by generating two backup keys that the user is instructed to store off-line, which are then used every second month to rotate keys, and can be used as proof that you own a lost key. Then you can implement certificates etc. independently, e.g. proof that the person holding the key is a certain person. But it's important that such things are not in the spec - or it would be too complicated, leading to no or slow adoption.


This is brilliant! Has the IETF put out any proposals for a standard of this sort or is this just an idea you had?


Unsure if sarcasm, but OpenID has been a thing for over ten years, and the only place it's ever gained significant traction is with Facebook and Google as the providers: https://en.wikipedia.org/wiki/OpenID#History

Other than StackExchange, I can think of no other major site that looked at the over-engineered protocol, the implementation headaches, the confusing user experience (the NASCAR board of provider logos), and said "yes, that is definitely what we want to have rather than a validated user email address."


Chrome supports U2F and that is basically a client cert based ID. But yes, the browser should be able to manage identities on a per domain basis.


> The contact list can even be inside the browser, and web site's can ask for it, allowing for example white-listing in messenger apps, or let the user pick who are allowed to see their family pictures, etc.

How would that work for sites like Hacker news, reddit, and twitter? Does every single user have to preview and approve every single other user on the website? That doesn't scale at all.


It would work the same, except you can sign up with one click, log in with one click, and the server wouldn't store any password/key. But it also allows for additional functionality; for example, in a photo-sharing app/site you could tell it that only those with these IDs should be able to see the picture, without the site/app needing to know who those people are - they don't even need to have an account on that site.


This is basically U2F


One of the reasons that the percentage of users who have JS enabled continually goes up is because web developers make their sites non-functional when JS is disabled.


An error only becomes a mistake when you refuse to correct it.


> the vast majority of web-based exploits

Do you have a source for that? As far as I know, most web-based exploits come from external plugins such as Flash, PDF, video, etc...


Then just turn Javascript on to log in, then turn it back off again. You just need Javascript for the sign-on page.


Then just get your wallet out to use the ATM in the dodgy neighbourhood and put it away again. You just need your wallet for the ATM in the dodgy neighbourhood.


How else are you supposed to get to your money?


You go elsewhere.


> Javascript is also required for the vast majority of web-based exploits.

So are browsers...


Interesting to hear from someone involved.

> This is the login page - users are typing in a long term stable identifier already!

There are so many other considerations at work here though, and I can't imagine that they're not obvious to you as well? For starters, we're creatures of convenience, and this makes it significantly inconvenient to block google scripts on other websites even when not signed in. It also guarantees that you have the chance to produce a (likely unique) JS-based fingerprint of every google user that can then be used for correlation and de-anonymization of other data.

But really the most basic point that probably makes folks here suspicious: if this were really only about preventing malicious login attempts by bots, then why not give users a clear, explicitly stated choice: either JS or 2FA.


I understand what you're saying and it makes sense. I think in my mind it's the fact that javascript has the potential to do so many things, not that it's being used that way today.

To use a bad car analogy, the in-car entertainment used to be just a dumb radio. Now that it's a computer connected to the main car network, it has a lot more potential to do things, whether it's a feature, bug, or an exploit.


It's unfortunate that your work is being used in such a massively economically wasteful manner.

The core issue is - it's all computers. The reason you can't reliably detect the difference between an average user using a computer to log in to your computer versus a fairly sophisticated computer using a computer to log in to your computer is that the transition between user and tool is not smooth. It is always going to be easier to find the various boundary points between user and tool than it is to construct a passable simulation of the user using that tool.

Yes, bot detection does make account hijacking attempts more expensive, but it makes all logins more expensive, and the rate of expense increases faster for you than it does the account hijackers.


Thanks Mike,

so the way I understand it, roughly it works like this: JS tries to gather some set of information about the browser's environment (capabilities, network access, reaction to edge-cases, …), sends it to Google, and Google decides if they should allow the user to continue authentication (by providing a crypto signature of request+nonce or something like that… or just by flipping a key in the session)
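Something like this shape, maybe - purely a guess at the mechanism described above, not Google's actual code; the signal names and scoring here are made up for illustration:

    import hashlib, hmac, json, secrets

    SERVER_KEY = secrets.token_bytes(32)

    def issue_nonce():
        # step 1: server hands the page a fresh nonce along with the (obfuscated) JS
        return secrets.token_hex(16)

    def score_signals(signals):
        # step 2: placeholder risk analysis; the real signals and weights are the secret sauce
        return signals.get("has_window", False) and not signals.get("webdriver", True)

    def sign_token(nonce, signals):
        # step 3: if the signals look human, return an HMAC over the nonce that the
        # login POST must carry; no token, no login
        if not score_signals(signals):
            return None
        payload = json.dumps({"nonce": nonce, "ok": True}, sort_keys=True).encode()
        return hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()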


> All mass account hijacking attacks rely on bots that either emulate or automate web browsers, and Google has a technology that has proven quite effective at detecting these tools

It sounds untrue and short-sighted. Even Google provides the tools for anyone to automatically navigate a JavaScript-enabled website with Chrome Headless. All this will do is provide short-term security before the bots can again perfectly mimic humans, this time with JavaScript enabled.

Most "mass bots" can be stopped by logging the access attempts on the server side, with plain old HTML on the client side.


I'd be curious whether the tools are only effective because so few logins currently require them. That is to say, right now, many malicious users are easy to spot using these, precisely because they don't have to be hard to spot.

That is to say, in the long run, it will be interesting if this actually reduces malicious use. Seems like it would be just as easy to avoid.


Thanks so much for chiming in here.

Admittedly, bot detection is really interesting to me as a subject. It's not the kind of thing you can throw infinite ML at and expect it to break even in terms of scaling; you need careful tuning and optimization baked in from the ground up, which means a fundamentally "manual" approach involving lots of lateral creativity and iteration. That creativity and one-upmanship (along maybe with the gigantic piles of money, because manual ;)) is what makes the field so interesting to me - but alas, I cannot pepper you with questions/topics/discussion/anecdotes/stories for two hours for obvious reasons :)

So, instead, a couple of questions, considering datapoints likeliest to benefit others, and perhaps (hopefully) provide some appropriately dissuading signals as well.

- How does Botguard detect Chromium running in headless X sessions? By looking at the (surely very wonky) selection of pages visited, or...? (Obviously IP range provenance is a major factor, considering the unusable experience Tor users reportedly have)

- Regarding the note about "bending over backwards", while playing with some old VMs very recently I observed with amusement that google.com works in IE 5.5 on Win2K but not IE 6 on Win2K3SP4 (https://imgur.com/a/zg7FoAW), possibly due to a broken client-side config, I'm not sure. In any case, I've also observed that Gmail's Basic HTML uses very very sparing XHR so as not to stack-overflow JScript, so I know the bending-over-backwards thing has stuck around for a long time. Besides general curiosity and interest in this practice, my question is, I wonder if this'll change going forward? Obviously some large enterprises are still stuck on IE6, which is hopefully only able to reach google.com and nothing else [external] :)

- I wonder if a straightforward login API could be released that, after going through some kind of process, releases Google-compatible login cookies. I would not at all be surprised if such ideas have been discussed internally and then jettisoned; what would be most interesting is the ideation behind _why_ such an implementation would be a bad idea. On the surface the check sequence could be designed to be complex enough that Google would always "win" the ensuing "outsmart game", or at least collect sufficient entropy in the process that they could rapidly detect and iterate. My (predictable) guess as to why this wasn't implemented is high probability to incur technical debt, and unfavorable cost-benefit analysis.

I ultimately have no problem that JS is necessary now; if anything, it gives me more confidence in my security. Because what other realistic interpretation is there?


I'm glad you're interested! The world could use more people tackling spam.

I'm not going to discuss signals for obvious reasons. Suffice it to say web browsers are very complex pieces of software and attackers are often constrained in ways you might not expect. There are many interesting things you can do.

I have no idea how much effort Google will make to support old browsers going forward, sorry. To be double-super-clear, I haven't worked there for quite a while now. Over time the world is moving to what big enterprises call "evergreen" software, where they don't get involved in approving every update and things are kept silently fresh. With time you'll see discussion of old browsers and what to do about being compatible with old browsers die out.

Straightforward login API: that's OAuth. The idea is that the login is always done by the end user, the human, and that the UI flow is always directly with the account provider. So if you're a desktop app you have to open an embedded web browser, or open a URL to the login service and intercept the response somehow. Then your app is logged in and can automate things within the bounds set by the APIs. It's a good tradeoff - whilst more painful for developers than just asking for a username/password with custom UI each time, it's a lot more adaptable and secure. It's also easily wrapped up in libraries and OS services, so the pain of interacting with the custom web browser needs to be borne by only a small number of devs.
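For anyone unfamiliar, the authorization-code flow for a native app looks roughly like this; the endpoints and client_id below are placeholders, and a real integration should use the provider's documented endpoints, PKCE, and ideally an existing OAuth library:

    import secrets, urllib.parse, webbrowser

    AUTH_ENDPOINT = "https://accounts.example.com/o/oauth2/auth"    # placeholder
    TOKEN_ENDPOINT = "https://accounts.example.com/o/oauth2/token"  # placeholder
    CLIENT_ID = "my-app"                                            # placeholder
    REDIRECT_URI = "http://127.0.0.1:8080/callback"                 # app's local listener

    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "email",
        "state": state,
    }

    # 1. The human signs in directly with the account provider in a real browser.
    webbrowser.open(AUTH_ENDPOINT + "?" + urllib.parse.urlencode(params))

    # 2. The app's local listener receives ?code=...&state=..., checks the state,
    #    then POSTs the code to TOKEN_ENDPOINT to get tokens. The app never sees
    #    the user's password.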


Couldn't the attacker just switch to imap/pop3 for those attacks? (And why aren't they doing that already?)


pop3 / non-oauth imap login attempts are blocked unless the user has explicitly opted in to allow those protocols:

https://support.google.com/accounts/answer/6010255


"Google had the ability to enforce a JS-required rule on login at least 6 years ago and never used it until now."

Um ... this should terrify everyone.

'We had the power to impose this, and we graciously chose not to. You should be thankful for what we have done.'

No. Just ... no.


What exactly? The fact that the login owner can change the login system at any point?


Do you glow in the dark?


> Firstly, this isn't some weird ploy to boost ad revenue.

Everything Google does is a ploy to boost ad revenue. That's the whole business model.

> Without a doubt, it's being enforced for the first time due to some large account hijacking attack that has proven impossible to stop any other way. After so many years of bending over backwards to keep support for JS blocking users alive, it's presumably now become the weakest link in the digital Maginot line surrounding their network.

You're doing two things here:

1. Reasoning from no evidence, and ignoring a much simpler reason in the process.

2. Acting like the honest web users, the ones blocking malware-laden JS, are the ones who are wrong.

It's much simpler to conclude that Google engineers simply got lazy and decided to punt the hard work of security to some JS library, instead of looking at it honestly.


ITT: people dramatically under-estimating the risk to their accounts from credential stuffing and dramatically over-estimating their security benefits from not running JS.

They're probably right that not running JS is privacy accretive, but only if you consider their individual privacy, and not the net increase in privacy for all users by being able to defend accounts against cred stuffing using JS. The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.

tl; dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.


Your first statement is incompatible with your second. (I think the second statement is reasonable, although I disagree with the conclusion).

People aren't underestimating the risk to _their_ accounts, they are discounting the risk to _others'_ accounts.

That is, they're essentially saying, 'well, other users chose to have bad passwords, so bully them'.

I think that's a fair viewpoint to have. We've entered a world in which computer literacy is a basic requirement in order to, well, exist.

That said, what's reasonable, and what actually occurs, are two different things. A company isn't going to ideologically decide "screw the users that use bad passwords" if it loses them money.

So we get _seemingly_ suboptimal solutions like this.


I think you may be giving people more credit than they deserve, but I'm willing to accept that they're making that argument. Even if that's their argument, that their personal habits around password use and being attentive to not being phished are so good they don't need Google's help defending themselves, so bully for everyone who does, I'm not convinced it's a good one.

There are a few things needed for that to be a good argument:

1) Their security really is so good (I'd bet it isn't. I saw a tenured security professor/former State Department cyber expert get phished on the first go by an undergrad.)

2) Google isn't improving their security posture on top of that (I'd be shocked if Google isn't improving theirs, and I'm certain having JS required to sign into Gmail closes a major hole in observability of automation)

3) There are real harms from the JS being there for their security/privacy posture (as I've said elsewhere, I'm unconvinced Google is even allowed by their own privacy policy to do anything untoward here)

As to your point about computer literacy and existence, I think the sad truth is that computer engagement is required, but literacy is optional. When that's the case, large companies are in the position of having to defend even the least computer literate against the most vicious of attackers.


> I think the sad truth is that computer engagement is required, but literacy is optional.

You're right on, but I wouldn't call it sad.

The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work. There are endless amounts of things we could demand people spend their precious time deeply understanding. We just like to demand tech-savviness because it's self-aggrandizing.

Like everything else, the solution is to help people on their own behalf.

At an online casino I once worked at, we ended up generating random passwords for our users. We had to, because otherwise attackers would look up usernames in the large password dumps online and log in as our users. No amount of warnings on our /register page stopped password reuse. So we decided we could do better than that, and that "well, we warned you" was not an appropriate response.

If you look around at everyday objects, everything is designed to protect the user. But for some reason in computing we're still in the dark ages of snickering and rolling our eyes at users for making mistakes.


> The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work.

Exactly! We require "car literacy" in drivers before we allow them to use them. Pretty much every advanced economy has mandatory driver licensing.

A driver can trivially press a few levers and slam themselves into a barrier at 100mph. But they don't do that, because they know, through experience and education, that it's a terrible idea.

That's the exact opposite of the approach that would have cars restrict their own usage into a narrow set of patterns and refuse to function otherwise.

WRT the last half of your comment: I think that's reasonable. Generating random passwords for users is a fair approach.

Account security exists on a spectrum. I don't think anyone (reasonable) is arguing against that, we're talking about mutable state here, actual _actions_.

What I'm railing against, is this idea that every webpage on the internet needs to be behind a CAPTCHA that does a bunch of invasive data collection including probably asking the user to perform a Mechanical Turk task in order to _access a website_ without even logging in.

It happens all the time. A website doesn't like my IP block -> forced through a bunch of nonsense. The site operator probably isn't even aware because they're using an upstream service which does it for them.


If by ‘cred stuffing’ you mean brute forcing accounts, that’s what short lockouts and 2 factor authentication are for. JavaScript is just a layer of obfuscation and doesn’t fundamentally help.


Credential stuffing more commonly refers to the practice of getting valid sets of creds from various password database dumps and retrying them across common/popular systems.

2FA is a good defence against it, but lockouts less so, as the attacker will be going broad and not deep (could be a single request per user account)


IP-based lockouts, as opposed to account-based lockouts, do better against cred stuffing, because there is a cost to getting more IP addresses. Maybe carrier-grade NAT would lead to too many false positives?
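A minimal sketch of what a combined IP + account throttle looks like, in-memory only and with arbitrary numbers; a real deployment would back this with a shared store like Redis:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 600
    LIMITS = {"ip": 30, "account": 10}

    _attempts = defaultdict(deque)   # key -> timestamps of recent failed attempts

    def _over_limit(key, limit, now):
        q = _attempts[key]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()              # drop entries outside the sliding window
        return len(q) >= limit

    def allow_login_attempt(ip, account):
        now = time.time()
        return not (_over_limit(("ip", ip), LIMITS["ip"], now)
                    or _over_limit(("account", account), LIMITS["account"], now))

    def record_failure(ip, account):
        now = time.time()
        _attempts[("ip", ip)].append(now)
        _attempts[("account", account)].append(now)

The per-account side has the denial-of-service risk discussed elsewhere in this thread, and the per-IP side is what a large botnet is designed to sidestep.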


Most bad actors doing abuse at scale have access to large networks of proxies on residential or mobile IPs, usually backed by malware on workstations, laptops and mobile phones.

Even as a newcomer without the right contacts on the black market you can get started with very little upfront investment, using services like https://luminati.io/ (they pay software developers to bundle their proxy endpoints within their apps).


IPv6 addresses aren’t really scarce.


/64 and /48 are pretty much on the same order of magnitude as IPv4s in terms of difficulty of acquisition, and I don't know why you would ever look at more than /64 when most major operating systems randomize the last 64 bits anyway (RFC4941).


Cred stuffing is using botnets, you aren't going to have more than a couple login attempts per IP.


I don’t see how JavaScript is anything more than a bandaid for that. The assumption is the attacker has the usercode and password combination and then you want to prevent him from logging in.


I'm not up to speed with the latest and greatest of what JavaScript can do, but isn't the source code fundamentally user-visible?

We always used to laugh at people who did website security with javascript, the whole idea was that security processing had to be done server-side.


Javascript can be served dynamically as well, even per user/connection. So an attacker would have to investigate and counter each new version of the scripts. Even if this could be done automatically, it greatly increases the cat-and-mouse factor for Google.


So, basically javascript is used for security through obscurity?


Obscurity is just another layer you add onto your security. As with all security methods, none is perfect and it's always a balance with usability.

But with security at this level nowadays, every added layer helps. Even if it is not even used in the initial authentication step. Think of classifying certain patterns in the attacks and retroactively de-authorizing after login, increasing the time-cost for the attacker.


security through obscurity may be no security at all, but security without obscurity is probably not as good as security with obscurity for many security scenarios that one can imagine.


Can't you use Javascript to implement challenge-response authentication, which meaningfully improves security by:

1. Preventing interception of passwords on the wire

2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective

3. Requiring that brute-force attackers either run a Javascript interpreter (dangerous, because the web site chooses what they do and could make them mine Bitcoins) or rewrite their brute-forcer each time the JS-driven network communication channel is altered

It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...
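For point 2, a hashcash-style puzzle is one way to get a tunable difficulty knob; a generic sketch under that assumption, not any site's actual scheme:

    import hashlib, itertools, secrets

    def issue_challenge():
        return secrets.token_hex(16)

    def solve(challenge, difficulty_bits):
        # client-side work: find a nonce whose hash has `difficulty_bits` leading zero bits
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge, nonce, difficulty_bits):
        # server-side check is a single hash, however expensive the solve was
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

The asymmetry (cheap to verify, expensive to solve) is what makes the difficulty parameter meaningful against brute force, though it does nothing on its own for point 1.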


>1. Preventing interception of passwords on the wire

Isn't this solved by https? I have no idea, but I hope at least that https protects my passwords.

>2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective

I don't want to wait for a login more than a second. Actually, I don't want to wait at all.

>3. ... or rewrite their brute-forcer each time the JS-driven network communication channel is altered

How is this different from altered HTML/CSS? An attacker has to adapt to the altered login page. It is not an argument for javascript.

>It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...

You say it: a protocol! not a piece of javascript.


> Actually, I don't want to wait at all.

Neither do I. But I also accept that, given the sheer volume of stolen creds and bots out there, sites that damage attackers' bang/buck ratio, even at the cost of very minor inconvenience to users, are likely to be targeted less frequently and in lower volume. Even if I wasn't begrudgingly willing to pay that price, I'd at least admit to the logic of making the process more time-consuming as a deterrent.


> I don't want to wait for a login more than a second.

Do you log in that often?


I find the idea of detecting someone's trying to bust your login page with some kind of automated system and deciding to serve them a ridiculously aggressive Bitcoin miner rather amusing.


You risk setting up a little cold war that you probably don't have time for though...

"Think you are clever, eh, try this for size..." -- some attackers in response to being affected by your counter measures.


What's stopping that war from happening at any other time? If an attacker has the resources and carelessness to mount such an attack on a whim, you should be prepared for it?


Nothing, but trying to hack them back seems like a way to invite more personal attention than your services might otherwise get.


I believe some US banks have been doing this for a while.


> Can't you use Javascript to implement challenge-response authentication

> 1. Preventing interception of passwords on the wire

It can, but challenge-response that isn't PKI based requires the remote side to have the secret stored, or the local side to know how to generate the value that is stored instead, which goes against other recommended practice (with PKI the remote side can store the public key and ask for something to be signed with the private key).

Protecting passwords on the wire is better done with good encryption and key exchange protocols - in the case of web-based systems that is provided by HTTPS assuming it is well configured.

> 2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective

Could you give an example of that? If you are tuning difficulty based on the computation power of the other side, surely the other side could lie about being low powered and get an easier challenge?

> 3. Requiring that brute-force attackers either run a Javascript interpreter (dangerous, because...)

A knowledgeable attacker doing this would be safe: they'd make sure the interpreter was properly sandboxed (to avoid reverse hacking) and given execution resource limits (to avoid resource waste). Then if the site/app is important enough that they really want in, they modify their approach if the resource limits are hit.

> or rewrite their brute-forcer each time the JS-driven network communication channel is altered

If your method is only used by you (and you aren't a Google or similar so you are big enough to be a juicy target on your own) and you enter into this arms race you might find it takes so much resource that it gets in the way of your other work. You are only you, the attackers are legion: put one off and another will come along later. Also there is the danger in rolling your own scheme that you make a naive mistake rendering it far less useful (potentially negatively useful: helpful to the attacker!) than your intention.

If the method is more globally used then it is worth the attackers being more persistent.

> It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification.

It can, though often only against simple fully automated attacks. Cleverer automated attacks may still succeed, as may more manual ones, and targeted manual attacks will win by inspection & replication.

Or they get in through an XSS, injection, or session hijacking bug elsewhere (bypassing the authentication mechanisms completely) that you missed because you spent so much time writing an evolving custom authentication mechanism.


Regarding point 1, you can combine 'challenge response' with Diffie-Hellman to

Have the server side not know the password

And be secure against replay attacks given attacker access to plain text.

The scheme is something like:

Setup

Server generates key (x, xG) where G is some elliptic curve base point. It stores x and sends xG to the client.

The client computes y = H(password) and sends yG to the server.

The server stores the shared secret x·(yG).

Authentication:

Server generates nonce r and sends r·(xG).

Client computes y = H(password) and responds with y·r·(xG).

Server verifies that the response equals r·(x·yG).

End

In this protocol, an attacker with access to plain text, even during setup, still can't do anything.

This method is weak against MitM, but that can be solved on auth by doing a fully ephemeral Diffie-Hellman there.

I concocted this scheme in like 10 minutes, so there might be mistakes, and it is probably suboptimal.
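Here's the same shape in code, swapping the curve operations for plain modular exponentiation (g^x in place of xG) so it's standard-library only. Toy parameters, purely to illustrate the protocol above; a real design would use a standardized PAKE like SRP or OPAQUE:

    import hashlib, secrets

    P = 2 ** 255 - 19          # prime modulus, illustration only
    G = 5

    def H(password):
        return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % P

    # --- setup ---
    x = secrets.randbelow(P - 2) + 1          # server key
    server_pub = pow(G, x, P)                 # "xG", sent to the client
    y = H("correct horse battery staple")     # client-side hash of the password
    client_pub = pow(G, y, P)                 # "yG", sent to the server
    shared = pow(client_pub, x, P)            # server stores "x(yG)" = g^(xy)

    # --- authentication ---
    r = secrets.randbelow(P - 2) + 1          # server nonce
    challenge = pow(server_pub, r, P)         # "r(xG)" = g^(xr), sent to the client
    response = pow(challenge, y, P)           # client: "y r (xG)" = g^(xry)
    assert response == pow(shared, r, P)      # server checks "r(x(yG))"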


No not necessarily, and if it is, it's bad practice.

Client-side JS will always be readable; even if you obfuscate it, you can't rely on it never being decompiled.

But server-side js never has to reach the client, it can be used to dynamically generate basically anything.


> Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses

Why is it not sufficient simply to throttle logins at the server?


Modern cred stuffing is done by botnets. When I see a cred stuffing attack, it's maybe 1-3 attempts per IP address spread over 100-500k IP addresses. Often you'll have a family of legitimate users behind an IP address that's cred stuffing you at the same time.

Throttling by IP address may have worked 10 years ago, unfortunately it's not an effective measure anymore.

Modern cred stuffing countermeasures include a wide variety of exotic fingerprinting, behavioral analysis, and other de-anonymization tech - not because anyone wants to destroy user privacy, but because the threat is that significant and has evolved so much in the past few years.

To be entirely honest, I'm kinda surprised Google didn't require javascript enabled to log in already.


Any advice on where to read more about these modern cred stuffing countermeasures? I'd love to learn more.


Unfortunately I don't have much reading material to provide. It's a bit of an arms race, so the latest and greatest countermeasures are typically kept secret/protected by NDA. The rabbit hole can go very deep and can differ from company to company.

The most drastic example I can think of was an unverified rumor that a certain company would "fake" log users in when presented with valid credentials from a client they considered suspicious. They would then monitor what the client did - from the client's point of view it successfully logged in and would begin normal operation. If the server observed the device was acting "correctly" with the fake login token, it would fully log it in. If the client deviated from expected behavior, it would present false data to the client & ban the client based on a bunch of fancy fingerprinting.

Every once in a while, someone will publish their methods/software; Salesforce and their SSL fingerprinting software comes to mind: https://github.com/salesforce/ja3


A relatively successful company in the area is Shape Security. Their marketing is a bit painful, but they invented the concept of cred stuffing. Disclaimer: I worked there for four years.


Fundamentally it's a question of fingerprinting the behaviours of humans versus bots. The problem is that it's becoming increasingly difficult to distinguish them, particularly when bots are running headless chrome or similar, and real users are automating their sign-ins with password managers.

I don't do much of this sort of thing, but numerous things come to mind. Aim to identify and whitelist obviously human browsers, blacklist obviously robot browsers, and mildly inconvenience/challenge the rest.

For example, an obvious property of a real human browser is that it had been used to log in successfully in the past. Proving that is left as an exercise for the reader, though it inevitably requires some state/memory on the server side.
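One simple way to carry that "seen this browser before" state, sketched with an HMAC-signed long-lived cookie; the cookie fields and function names here are made up for illustration:

    import hashlib, hmac, json, secrets, time

    DEVICE_COOKIE_KEY = secrets.token_bytes(32)   # server-side secret

    def mint_device_cookie(account):
        # set after a successful login; its presence later is a strong whitelist hint
        payload = json.dumps({"acct": account, "ts": int(time.time())}, sort_keys=True)
        mac = hmac.new(DEVICE_COOKIE_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "." + mac

    def is_known_device(cookie, account):
        try:
            payload, mac = cookie.rsplit(".", 1)
        except ValueError:
            return False
        expected = hmac.new(DEVICE_COOKIE_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected) and json.loads(payload)["acct"] == account

Absence of the cookie shouldn't block a login outright (people clear cookies and change machines), but it's a cheap signal to feed into whatever challenge logic sits behind the form.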


This is a rare paper published on the topic.

https://link.springer.com/chapter/10.1007%2F978-3-319-07536-...


A company I am considering investing into: https://fingerprints.digital/


Are they looking for funding? They appear to be privately funded.


They have been at https://www.wolvessummit.com/ - they are preparing for a funding round. You can find them at other events listed on their page: https://fingerprints.digital/event/


But you don't have thousands of families logging in from thousands of different servers: in your case a max of 10 login attempts would prevent it.


Throttle based on what? IP address? This works for domestic IT departments looking to shut out automated attempts from specific ranges but at Google's scale IP based filtering could end up shutting out an entire country.


> Throttle based on what?

User Id?


That's a terrible idea. Back when MSN was one of the most common instant messengers, there was a common prank called "freezing" where you just continuously kept trying to log into someone's account and it would lock itself out for 15 minutes or more depending on how long you kept doing it.

There were automated tools that did this too!


That's the first obvious countermeasure and will prevent hackers targeting a specific account. But there are other ways to crack passwords; one is to try the same password but iterate over user ids instead. As hackers would start with the most common password, you can't throttle globally on same-password attempts either because, well, it is by definition the most commonly used one, which will see a lot of legitimate traffic.


Google can ban common passwords, or passwords that look like they’re being targeted (over the long-run).


This has nothing to do with anything but I don't know how else to get in touch with you. Could you upload your zero spam email setup guide somewhere? Your site was hacked so the link I had doesn't work:

http://iamqasimk.com/2016/10/16/absolutely-zero-email-spam/


I’m sorry, I changed the domain to QasimK.io, but neglected to set up forwarding. I will do that.

http://qasimk.io/2016/absolutely-zero-email-spam/


"Credential stuffing" as I've heard it used refers to taking username/password combos from one breached site and trying them in other sites.

So for example LinkedIn has a breach, which reveals to evildoers that user 'johnsmith@example.com' uses the password 'smith1234' then they test that username and password in Amazon, Netflix, Steam and so on.

They only make one attempt per account, because they only have one leaked password per account. Hence, throttling per account isn't an option.


That would create an easy denial of service attack: if I wanted to deny you access to your account I'd spam it with bad login attempts.


Happens weekly to my Sears account.


With credential stuffing, isn't it unlikely the perpetrator wants to make more than one or two attempts per user ID?


Which country uses a single IP address for all its devices/citizens?


All of Qatar's traffic used to be routed through 82.148.97.69, though that was back in 2006-2007. At one point it was banned from Wikipedia, which unintentionally affected the whole country.

https://simple.wikipedia.org/wiki/User_talk:82.148.97.69


China Telecom does something weird with NAT, not sure what exactly but I've seen it mentioned here before


And indeed it's time to give up on the web being a document format only. The internet is about loading remote applications in your local sandbox. That's what it is. It sucks, but it is what it is. As part of loading remote applications, we now might be asked to compute whatever anti-abuse puzzles are required. So it goes.


If something shitty is happening, you don't have to shrug your shoulders coswhatyagonnado. Understanding the human reason why something shitty is happening doesn't mean you have to accept it. So it goes, until it doesn't.


Passwords are obsolete - actual security would involve keys. The fact they have to care about automation for security instead of availability is a sign they have already lost. If even a disposable EC2 server has its administration password-accessible, you are already doing it horribly wrong, because you /will/ get attacked frequently.

Javascript is opening an attack surface for what will certainly turn into an arms race anyway instead of ending it.

Given that they aren't pushing a new standard for what has already been a problem for a long time, and are instead introducing a vector for abuse both to and from it, Google can be criticized far more for both of those sins.


To be fair, Google released their own OTP hardware keys and have already made 2FA login mandatory for accounts that they deem "high risk."

I don't think it's fair to blame them for the facts that most folks are not willing to give up passwords yet. Given that passwords are the current reality, shouldn't they do everything in their power to make them as secure as possible?


Calling other people, or their opinions, shortsighted and self-centered is usually not the start of a good conversation.


I'll be the guy who says that while I recognize those are insults, they are also sufficiently descriptive of a point of view... it's not like he called someone Mr. Poopypants.


You're right. I was shortsighted and self-centered.


To a good conversation, no. But a conversation isn't always what's desirable.

Sometimes one just wants an accurate depiction of a situation -- and those might still be totally accurate characterizations...


I mean, why not cut to the chase?


So what about an opt-out at account level? Something in the account settings, like this:

[check] Allow sign-in from javascript disabled browsers. WARNING etc. (usual warnings about security etc.)

Edit: because users who know to use long passwords and 2FA do exist and don't need all that extra security stuff ...


> So what about an opt-out at account level? Something in the account settings, like this:

> [check] Allow sign-in from javascript disabled browsers. WARNING etc. (usual warnings about security etc.)

It sounds a bit like what Gmail's doing with their "allow less secure apps" login option, except that's more for allowing IMAP logins using password instead of OAuth.


I used a long and supercomplicated password for one of my accounts that i access intermittently. Why I have it is a long story, but I only log into it once or twice a month to check if there is something that needs my attention.

Usually the login is in incognito or guest mode, and even from different locations and machines. Google asks for a second factor (I don't have 2FA turned on for my accounts), like phone verification, for my usual accounts (not-so-complicated passwords) but not for the one with the complex password. So I think the level of extra steps/security is linked to how complex your password is. Not so sure if this is a good thing or bad. But I hope they continue basing their security measures on the security measures you take.


I asked LastPass to generate me a long and complicated password for a new Office 365 account only to have it rejected as too long because it was over 16 characters. Sigh.


It's probably the switching of devices that raises the level of security.


Maybe one reason is that Google doesn't know which account is trying to log in before the login page, so how could they remember that security setting before attempting to serve JS?


I don't understand why anybody concerned about having JS on a login screen would want to log into Google in the first place. I imagine there's a tiny overlap between "Runs NoScript" and "Trusts Google"


It costs money to support and a minuscule number of users would care.

The majority of Google's customers also don't pay for an account.


Even when I ran NoScript, Google domains were allowed. It's an extra step for a small number of technical users.


> ITT: people dramatically under-estimating the risk to their accounts from credential stuffing and dramatically over-estimating their security benefits from not running JS.

Passwords are effectively obsolete and everyone should be using multi-factor authentication of some kind. Keys with passphrases. 2FA auth. Whatever.

Making 2FA auth mandatory would be substantially more effective than bot signaling.

> tl; dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.

If they were, 2FA auth would be mandatory with additional phone-based (i.e. SMS) whenever you try to login from a new geographic area. That would stop anything short of a targeted hack.

Instead, they created an attack on the bot maker's profit margins. Cloudflare, Google, et al. are really just trying to increase the cost of making bots. They are not really trying to _stop_ bots.

Stopping bots requires making unpopular choices.


XSS vulnerabilities are everywhere. You obviously don't realize that.

Note that I do use JS, because it makes life easier. But you've got to realize that not using JS will at some point protect you against an XSS vuln. They are that prevalent.


There's absolutely nothing in the above comment that indicates the person you're replying to doesn't know the prevalence of XSS vulnerabilities.


I'm pretty familiar with XSS prevalence and I agree with them.


Running JavaScript means parsing text from an outside source plus executing a program from an outside source. Both require really complicated code, measured in millions of lines of code.

And that's still an underestimate.


> The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.

That's quite the hand-wave. How do you even measure privacy loss? And given that browsing history is not in your inbox, why are you so confident that one compromised email account is a bigger deal?


This is coming right after the reCAPTCHA v3 announcement

https://news.ycombinator.com/item?id=18331159

Sorry, you don't have enough Google Points to browse the web. Please enable JavaScript and install Google Chrome.


Recent new version of Google Mail flat out doesn't work to any usable standard in Firefox. Ten seconds to open a new 'compose mail' window. A context menu does a multi-second HTTP fetch before showing. The previous version worked great.

Either the dev team has just given up on quality or they're intentionally goading me into installing Chrome. I'm not going to play that game -- at this point Thunderbird works better.


Switching email providers is reasonably painless, fwiw. Set up forwarding, migrate mail when you can.

Even better if you set up the majority of your non-security-essential mail to be at your own domain, hosted by Fastmail/etc. Then you can easily change your email provider and your contacts don't even care. I've yet to implement this in my own life; I just switched to Fastmail, so I can't speak from personal experience on the domain portion of it.

NOTE: I mentioned non-security-essential email in reference to things like your bank login, or things that could threaten your life essentials. I say this because theoretically (and it has happened before), using your own domain increases the attack surface. My personal plan is to set up custom-domain email with Fastmail, but still use the plain me@fastmail.com for my security-focused emails. The majority of my email will still be based on my custom domain for easy portability, but I plan to avoid that for my bank, for example... assuming Fastmail lets me.


I can speak from experience regarding FastMail because that's exactly what I did. In fact, I migrated off a grandfathered Google Apps account with my custom domain to FastMail with that same domain. Yeah, it's a bunch of steps, but I'm very comfortable with making DNS changes. My wife and I have an account; it's worth every penny.

Also, FastMail allows for subdomain handling. I use this feature with nearly every site. You can have *@<YourFastMailId>.<YourDomain>.com route to <YourFastMailId>@<YourDomain>.com just as you'd expect. The way this handling works is even configurable.


Another very happy user of FastMail here, with our own domain. I initially was excited by subdomain handling, but switched back to only using my main account.

Using FastMail-specific features will lock you into this specific vendor once again, one of the main reasons to switch in the first place!


To be fair, how FastMail does catch-all delivery like this is standard and easily reproduced at any mail vendor (except Office 365) that supports catch-all, which is most of them. I use a catch-all address with FastMail that is @asubdomainichose.mydomain.org, and it is the same subdomain I used with my previous setup before moving to FastMail.

Using a subdomain for catch-all is great because spammers can’t easily discover and flood the subdomain.


I'm in the weird Google Apps for Your Domain limbo right now myself. I've wondered what would happen if I switched to something other than GMail but kept my google account with that email address.

I know a long time ago you could set up a Google account using a non-GMail email address but I'm not sure if that's even a thing anymore. That's what I want though. Keep the email address with my own domain that I've used for 17 years and just have a regular old Google account using that email (and keep all my Google services and purchases associated with it).

Google has been absolutely terrible to Google Apps for Your Domain users (who were often Google's biggest supporters back in the day). They've been shoved into this weird second class status where their Google accounts only partially work with Google services. I completely regret ever setting it up.


You absolutely can set up a Google account with any email address you want.

https://accounts.google.com/SignUpWithoutGmail

I use Google services heavily at work, all on a Google account that was created with my work email address. And we are not a Google shop; my employer's email is self-hosted Exchange.


You can continue using your email address for your Google account even if you've got someone else handling the mail now. You can also sign up for a Google account with an email account from any domain or provider.


I switched to Fastmail years ago and it was the best mail-related thing I ever did. I was dreading the migration but it literally took ten minutes, switch DNS records (I have my own domain), run Fastmail's import, done.

I still can't believe how fast the UI is. It's by far the fastest web app I've ever used, and the same goes for the service in general.

Seriously, just ditch Gmail now, the alternatives are great.


Gmail is more than just mail, it's also integration with other Google services, like calendar. How does Fastmail fare in that regard?


FastMail supports CalDAV. I use my FastMail calendar with Thunderbird (Lightning) and on my iPhone; works great. They also support CardDAV for contacts. /satisfied FM customer since ~2008 or so


I had to purchase a CalDAV and CardDAV app (which were extremely cheap, mind) for Android, so it's not quite as plug'n'play there.


Why is jjawssd's (sister) comment dead? Davdroid works great and is free (as in beer and speech), though I would encourage people to donate if it's useful to you.


I wouldn't know, I have a self-hosted calendar. From the little I've seen, though, the calendar part of Fastmail is very good too.


Which self-hosted calendar do you use? Would you recommend it? I'm in the market for a new one, but the current offerings that I've seen aren't great.


I use Radicale and find it great, but there's no UI, so you need to use whatever client you want that supports CalDAV (I use Lightning and the calendar on my phone). Lately I've been liking Nextcloud a lot, and that's a one-stop solution for lots of things, so nowadays I would recommend that if you have a home server or want to pay someone to host it.


Thanks!

I've looked at nextcloud, but IIRC, you have to have the whole suite installed, right? I'd love a way to just use the calendar function.


Yeah, you do. As I said above, Fastmail's calendar is very good too, and you can load your self-hosted/CalDAV calendars into it, so that's a good option.


Last time I tried fast mail they didn’t really support labels, only folders. Is that still the case, or is there a good workaround?


FastMail is standards-based, so it does not support labels. This is a good thing, and you should stop depending on Google-specific proprietary features. Even when I was on Gmail, I had a lot of issues with labels because the third party mail clients I needed to use didn't support them. The inbox tabs I ended up replacing in Gmail with rules/filters, that moved my social updates, for instance, to an actual social folder which worked properly on third party clients.

That being said, FastMail is also the leading developer/champion of a new mail standard called JMAP, which supports both labels and folders. I suspect, therefore, if it takes off, they may consider supporting labels themselves.


I used fastmail for a year and can recommend it, but if you’re European you should probably look up runbox instead as it’s housed in Norway.

That’s what I eventually switched to and it works fine.


> assuming fast mail lets me

It does let you, you can create as many aliases as you want (I'm assuming) on any of their or your domains.


Fwiw, I use Gmail exclusively in Firefox and have no problems at all. (And my machines are fairly dated.)


I've recently switched to macOS's built-in mail client with IMAP to Gmail, and I never have to wait for the UI to do something. So count me in as surprised at how far Gmail has gone downhill.


That's the thing that gets me. So many optimisations have gone into user interface software over the years. And some of the stories of early Apple work, like 'round rects [0]' are truly inspirational.

I wrote software using Cocoa about a decade ago (so I may be out of touch), and it was clear how much thought and effort had gone into making the user interface responsive. And it generally shows.

The idea that you would just give up on that precedent is baffling. And let's face it, email's important but it's not rocket science.

[0] https://www.folklore.org/StoryView.py?story=Round_Rects_Are_...


Never liked webmail anyways.

Thunderbird might be superior, but I really like that app (macOS Mail) because it's so light and fast.


What version of Firefox are you running? You are either exaggerating greatly or have other issues with your system. I run the latest stable release of Firefox and the performance of Gmail (particularly the features you mention) is fine. I’d be happy to upload a screen recording to verify.


He's not the only one. It's a recurring comment here on hacker news and a problem I've encountered as well, and I'm running the latest stable release.


Same here, I run the latest Firefox on both Windows and Linux. Gmail always takes at least 5 seconds to load.


If you experience a reproducible Firefox performance problem, please consider using the Firefox Profiler add-on [1] to record a profile and file a bug with "[qf]" added to the whiteboard field. These "[qf]" Firefox performance bugs get reviewed by engineers twice a week. Having a profile makes the bugs much easier to diagnose.

[1] https://perf-html.io/docs/#/


I think that Chrome also suffers on this front, but it's better at doing pre-fetching than Firefox is.

This could really just be that. I have a hard time imagining explicit sabotage of FF on the Gmail frontend. The likeliest explanation is that perf testing and the like only happens in Chrome.


The classic question to ask at this point is whether this affects you with a fresh install, or only once you have added all your extensions?


Firefox 63.0. Fibre internet connection. 3.1 GHz Mac, 16 GB RAM.

It's tricky to share a screen recording because there's personal information. But I just did two for my own curiosity. From a fresh load, once the "Loading Gmail" screen has gone away, it took 8 seconds and 11 seconds respectively from clicking 'Compose' to having a new window open.

Maybe there is variability. There are a million combinations of factors out there. I suppose as an engineer you make the trade off of "do I hope for the best case" vs "do I make something that works for a broad audience". The previous version shows that they can make something that works for my own anecdatapoint if they want to.


I have a lot of issues with Google Apps for Business. Sometimes I have to refresh the browser 5-6 times before it will display any email in the primary inbox as well.

It's just horrible to use in Firefox (on Arch Linux) and I'm currently looking for a new provider.

I might just go all in and use ProtonMail.


ProtonMail is great but doesn't offer custom domains. If you need domains, people mostly mention Fastmail, but I think there are far better choices, like mailbox.org and Kolab Now. Mailbox does not look like much from their homepage, but it has an awesome web client and is an extremely reliable private provider that's been in business since the 90s. I've had an account there for the last 5 years without a single problem.


This is actually not quite right, we have offered custom domain support since 2016 :)

https://protonmail.com/support/knowledge-base/custom-domain-...


I've experienced both very fast and very slow with the new Gmail on the same machine with Firefox on Linux. It's currently faster than Chromium, but maybe tomorrow I'll see ten-second load times. Who knows? For the record, I use uMatrix (and uBlock Origin on easy mode for client-side cleanup), which might be affecting it somewhat.


>and the performance of Gmail (particularly the features you mention) is fine

>ten seconds to load your inbox

>16 GB, i7, SSD, 100 MB/s internet etc.

>fine


Exactly what I was thinking. How is that even close to fine? WTF?


I’m a little surprised to read this because Google Mail works fine for me in Firefox (ArchLinux). In fact it’s smoother than some of the Electron-based clients I’ve tried and less painful than trying to get push messages on Thunderbird working (sure, there is always IMAP but that requires regular fetches).


FYI, IMAP actually allows "push messages" via the IDLE extension. If you use K-9 on Android, it's enabled by default. I've never used Gmail, but I'd be surprised if the Gmail IMAP server didn't support it (and I would dismiss Gmail entirely if it didn't).
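
For anyone curious what IDLE looks like from the client side, here is a rough sketch using the third-party Python imapclient package; the host, credentials, and timeout are placeholders, and this is just meant to illustrate the flow, not any particular mail client's implementation:

    # Requires: pip install imapclient
    from imapclient import IMAPClient

    client = IMAPClient("imap.example.com", ssl=True)   # placeholder host
    client.login("user@example.com", "app-password")    # placeholder credentials
    client.select_folder("INBOX")

    client.idle()                              # enter IDLE; the server now pushes updates
    try:
        # Block for up to 5 minutes waiting for the server to report changes
        responses = client.idle_check(timeout=300)
        if responses:
            print("Server pushed:", responses)
    finally:
        client.idle_done()                     # leave IDLE before issuing other commands
        client.logout()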


Is this a new thing? I don't recall seeing an option for that in Thunderbird (desktop version by the way; not the mobile / Android version) the last time I looked (~9 months ago).


IDLE is an old extension. Unfortunately Thunderbird is a crappy client.


What would you recommend then?

I don't use Thunderbird myself - I was just following on from the OP, who did use it. However, I've yet to find a client I like, so I'm genuinely interested in any suggestions you might have.


I'm currently using mutt with "getmail" (which does support IDLE), which I can recommend -- it's an excellent client, but only if you're fine with tweaking.

I used TB until two years ago, but I gave up with its unfixed bugs and quirks. I do prefer graphical clients, but not if they are clunky or buggy.

I used Sylpheed and Claws for years, but Sylpheed locks (or used to lock) the UI during fetch (unacceptable IMHO), while Claws has some critical bugs in the filter/rule logic that made me lose mail on several occasions by refiling into the wrong folder while processing a lot of messages. If you aren't a heavy filter user you might be fine with it, though; I think Claws gets a lot of things right.

KMail wasn't bad when I used it, but it was too long ago to make an honest comment today.


Same issue here. Mails not loading, poor initial load time. That is with zero extensions enabled.

I am now using mutt/notmuch/mbsync to prevent having to go through their horrendously slow web interface, and eventually move away from Gmail completely (probably to ProtonMail or fastmail).


> A context menu does a multi-second HTTP fetch before showing.

Where? The only one I can trigger that does any kind of network is in the inbox, and that's only to get some icons. The text for the options is already loaded.


I'm talking about the RSVP box for integrated calendar invites. Not a right-click context menu.


Yup, that was all the push I needed to migrate all my Google stuff to fastmail.


I have the same problems on Chrome.


Ditto, they really need to work on speed on the new gmail.


Firefox on mobile or PC? I haven't experienced that on PC.


I've had the same experience: poor performance and display anomalies.


If you enable privacy.resistFingerprinting in Firefox, you automatically fail the v3 CAPTCHA with a score of 0.1. People who want to try it out: https://recaptcha-demo.appspot.com/recaptcha-v3-request-scor...


Thanks! I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1503872

When we have time we'll have to trace through what it's doing and what components of RFP are causing the failure. (If anyone wants to do that and report in the bug, we (Mozilla/Tor) would much appreciate the contributions!)


Sounds like the feature is working exactly as intended and the problem is with reCAPTCHA.


Google has become increasingly annoying. Every time I browse from work, where I have to use Internet Explorer, I have to suffer Chrome ads. They also require me to solve a CAPTCHA every time I change the number of results per page via the search settings.


> Sorry, you don't have enough Google Points

In the past few months all our domestic devices have gradually hit that notional condition with Google Search. All the laptops one by one, and then last night my phone. My wife's phone is the only one that can still use their search without a ten-round Recaptcha challenge.

As each device was locked-out from Google I switched the default over to DDG.


Is there any traffic coming from your IP that would make the Google fingerprinting bot angry at you?


Me too. It's creepy being locked out of half the internet for daring to block ad servers.


What is especially interesting is that this will allow Google to track you on more pages, but that in this case, you can by definition not block the tracker. I've checked, but reCAPTCHA just falls under the general Google Terms of Service.

I don't believe this to be done with that goal, but it is an unfortunate side-effect.


ReCaptcha is like Cloudflare's free DDoS protection: we like to point at these services and complain how people are "ruining the web" by using them because that's what we do on HN. We ignore the big picture and whine.

But I encourage everyone to consider a darker reality: that centralized services by large companies are becoming more and more necessary in a world where it's becoming easier and easier to be an attacker. The internet is kinda broken. Like how half the ISPs in the world don't filter their egress for spoofed IPs because there's no real incentive. That every networked device in every household could unknowingly be part of a botnet because we aren't billed for externalities.

Yeah, maybe it's kinda spooky that now ReCaptcha v3 wants to be loaded on every page. But is that really the take-away? What about the fact that this is what's necessary to detect the next generation of attacker? That you can either use Google's omniscient neural-network to dynamically identify abuse or you can, what? Roll your own? What exactly is the alternative?

Do HNers think this stuff is a non-issue because nobody has ever attacked their Jekyll blog hosted on GitHub Pages (btw, another free service by a large company)?


That is exactly what I was trying to say with the final line in my comment: I do believe that this is necessary; it's just unfortunate that it comes with the tracking side-effect.

So no: the take-away is that this improves reCAPTCHA. A side remark to that is that it also improves Google's ability to track you, and hampers your ability to fight that.


Haha yeah, thanks for the recommendation Google. Chrome is never gonna come back on my PC.


reCAPTCHA can go frick off into a hole. I've stopped using all websites that use reCAPTCHA because it sometimes takes me 10 minutes to log in to them. I also don't feel right providing free data so Google can help a military drone bomb children on buses one day.

I miss old captchas.


> I also don't feel right providing free data so Google can help a military drone bomb children on busses one day.

reCAPTCHA v4: please click on all the pictures of insurgents.


They are such a pain point. Especially if you fill out a form accidentally, and have to go through the re-captcha again, and again and again for the most mundane of services.


Tbf you don’t require Google to browse the web.


There are also employers who don't treat their employees like children.


"When your username and password are entered on Google’s sign-in page, we’ll run a risk assessment and only allow the sign-in if nothing looks suspicious."

In my experience (it is already the case with Gmail and Outlook up to now), this means I will not be able to log in to my account when on holiday in another city or country, or when I use a borrowed device, or when I am behind VPN/Tor, etc., unless I give Google my phone number and can afford to get a call/SMS at that point in time to unblock the account.

It should be my choice, as it is my account that is at risk, to turn such dubious security measures on or off. It is fine to have these features on by default, but I would like to turn this particular feature off for my account. Any clever "risk assessment" thing where a computer decides, without an option to turn it off/on, is problematic.

I sometimes have the feeling they know this and it is on purpose. They want not only to collect data, they want to collect high-quality data, and these measures help to clean their data sets at the time of collection.


I travel frequently and have multiple Google accounts (3x G-Suite and one Gmail) and have never had any problems accessing any of them anywhere in the world. I do occasionally get alerts saying that they've blocked a login attempt from India or South America, though. It seems their system works pretty well.


This is actually surprising enough that I'm glad to hear it even as an anecdote.

My experience with changing devices or cities (or god forbid both at once) is that it always requires further authentication, and often fails outright. I have an account which is simply disabled because I didn't set a recovery phone # or email and then changed machines. Everyone I've ever discussed the topic with has described similarly pervasive problems.

Which makes me wonder: what's so different between usage patterns? Obviously Google's auth approach is working for lots of people, so what's distinctive about this block of users it's constantly failing for?


> In my experience (it is already the case with gmail and outlook up and now), this means I will not be able to login to my account when in holiday in another city, country, or when I use a borrowed device, or when I am behind VPN/Tor, etc, unless I give google my phone number, and can afford to get a call / sms at that point of time and unblock the account.

That's exactly it, there's already two Gmail accounts from high school I can't access despite knowing the passwords.


I get this on my own computer that I've been using for 6 years on a static IP. It happens at least once a month, sometimes several times. Each time they ask for a phone number confirmation when no phone number is linked to the account (and never will be).

Google™ employees have come in and found mind-bending ways to excuse it when I've mentioned this before.


I also have no faith in their risk assessment. For a very long time I have only used one computer from one location to log into my Gmail account and every time I log in they consider it a suspicious activity. They even forced me to confirm my identity on my last login. What's their risk assessment doing if it can't get the baseline right?


What makes you think that their end goal wasn't getting your identity confirmed?


I don't think they are questioning Google's end goal, but more the effectiveness of the current system.


It used to be the case that these checks were not in place when you were using 2FA. The downside is that you cannot use it without a phone to register in the first place (though you can use your own generators afterwards).


Yep, this happened to me when I created a separate account for travel. Immediate, permanent lockout.


They're giving you a free account to burst out mails with. Your account will most likely contain a lot of private or privileged information about other people, e.g. their mails, pictures, contact data, etc. You have a responsibility so why should you be allowed to reduce the security of your account?


Because I "have a responsibility": if it is truly mine, I should be allowed to.

But just as you said, they are giving it away for free, so it is technically theirs; we are not paying customers (except for G Suite users).


> But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users (0.1%) choose to keep it off. This might make sense if you are reading static content, but we recommend that you keep Javascript on while signing into your Google Account so we can better protect you.

They don’t seem to explain why though? Did I miss it? Are they fingerprinting the JavaScript environment of my browser? Why? The 0.1% are the people who would like to know why they need it, but this message is written ironically for those who don’t know what JavaScript is.


Additionally they imply the only motivation for disabling JavaScript is to increase performance and decrease bandwidth. They conveniently don’t mention the other, arguably more prevalent motivations: to increase privacy and security.


Yeah, it struck me as extremely disingenuous; while I like the other benefits, I disable JS mostly because I hate being tracked and noticed that many (most?) browser exploits require JS to run.


...and speed, and decreasing the amount of arbitrary code execution on your machine.

Most people don't disable JS entirely, but use something like uMatrix or noscript. It takes more work, but you can turn off a significant number of things that just don't need to be executed and get around a lot of annoying modals and paywalls (or see a lot of blank pages; that happens a lot too).


0.1% of Google accounts is a huge number of people. Millions?


They are trying to detect state-actor hacking, and to track individual devices and whether they are new or impersonations of known devices.

Some states have MITM certs on all their domestic machines, but (hopefully) not much competence except on whatever schedule they buy updates.

I would be implementing a U2F soft client in JS if I were Google. IMO you need a private key that a state would need to retrieve by tampering with the JS, and that isn't being sent over the wire with every connection. (Just to give them their first level of headache when it comes to transitioning from observing to impersonation.)


To keep your account secure, turn on Javascript?? If anything is making your web browsing less secure, it's JS.

I don't particularly care that Google isn't letting you sign in without JS, but the message is just plain wrong..


I'm not sure about you, but most people with JS "disabled" don't browse with JavaScript disabled entirely; instead they use a whitelisting/blacklisting plugin, otherwise they wouldn't be able to access many essential sites (e.g. banks). Under this setup, whitelisting Google isn't going to decrease security unless you think they're going to serve a 0-day when you sign in.


Passwords can be hashed directly client-side with JavaScript, which is way more secure than sending them in the clear on the wire, so I don't disagree with Google's stance here and don't understand the hate.


Hashing passwords client side has no benefit if a site uses HTTPS.

If a site uses HTTP, then hashing the password client-side and sending it up to the server is equivalent to sending a clear text password. If an attacker can already read your traffic, what is stopping them from using your password's hash to log-in to your account?


It stops them from using the password to log in to your other accounts.

It stops a compromised server from silently leaking unhashed passwords.

It makes password hashing user auditable.

You could even do a call-and-response model to stop the hashed password from being usable to log in at all. Here is a primitive scheme for such a model (public-key crypto probably enables more clever schemes, not sure):

- Upon signup, generate hashes of "$password$site$i" for i in 1 to 1000. Send these to the server and have the server hash them again.

- Upon login, after the user has entered their password into the box, send an integer i from 1 to 1000 to the browser, and have the browser send back the hash of "$password$site$i".

Now a compromised hash can only let you log in 1 time in 1000. Combine that fact with the other available signals for "is this who we think it is" and you should be able to reject people who stole the hash reasonably reliably. Meanwhile, since you are still hashing the password on the server (again), you have lost literally nothing but a tiny bit of computation time.
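
Purely to make the above concrete, here is a minimal sketch of the scheme in Python (as already said, it's a toy and shouldn't be used as-is; the hash choice, the `site` string, and the fixed count of 1000 are just taken from the description):

    import hashlib
    import secrets

    def h(s: str) -> str:
        return hashlib.sha256(s.encode()).hexdigest()

    # Client, at signup: derive 1000 one-time responses from the password.
    def client_signup_hashes(password: str, site: str) -> list:
        return [h("$%s$%s$%d" % (password, site, i)) for i in range(1, 1001)]

    # Server, at signup: store only the re-hashed responses.
    def server_store(responses: list) -> list:
        return [h(r) for r in responses]

    # Server, at login: pick a random index i in 1..1000 as the challenge.
    def server_challenge() -> int:
        return secrets.randbelow(1000) + 1

    # Client, at login: answer the challenge from the password alone.
    def client_response(password: str, site: str, i: int) -> str:
        return h("$%s$%s$%d" % (password, site, i))

    # Server, at login: hash the response again and compare with what was stored.
    def server_verify(stored: list, i: int, response: str) -> bool:
        return secrets.compare_digest(stored[i - 1], h(response))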


Use a password manager and don't reuse passwords. If your randomly generated, unique password has good enough entropy, then why go through all the trouble of the client-side hashing?

There's nothing stopping you from hashing your own passwords client side and sending the bcrypt hash up to the server, except that some sites still truncate passwords to 32/16 chars, etc.

When you need that level of security, client-side hashing will not be as good as the dedicated HSMs that many services now use for authentication.

Writing your own crypto flows can be extremely dangerous as you open yourself to all kinds of side channel attacks.


A password manager is a client-side method that only works for people who opt into it; Google needs to deploy a server-side method. Likewise with hashing my own passwords client side, and with HSMs.

As for writing my own crypto: indeed, if anyone actually used the scheme I suggested they would be making a mistake. I wrote it not to be used but to demonstrate, in an easy-to-understand way, that we can do better. Unlike me, Google has the resources to read the papers, do the math, carefully implement this, and do it properly.

Keywords for how to do it properly include "zero-knowledge password proof" and "password-authenticated key exchange".

PS. It's irrelevant to this conversation, but putting all my passwords into one program has always struck me as a monumentally stupid idea. I use one for passwords I don't care about; I memorize unique passwords for the ones I do care about.


Worshipping an arbitrarily contrived measure of password entropy makes for good security theatre, but there's a lot that goes into maintaining anything resembling actual security. How many people use "password generators" and trust that they'll come up with "random" words? What about that old saying about putting all your eggs in one basket?


> It stops a compromised server from silently leaking unhashed passwords

If you trust the site to deploy correct JavaScript to do this, then that's the same level of trust that they implemented password salting and hashing server side. You don't gain any robustness by moving this to JavaScript.

Your scheme is just a weak salting technique. You'd be better off with just using a longer salt and hash function.


I separately assume a salt is part of my hash function. Salts only help with rainbow tables (an admirable goal, but not my goal here).

I can trust the site to deploy the correct JavaScript more than I can trust it not to steal passwords because

- That is auditable - it is impossible for a malicious site to do so without risking being caught.

- The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.


> - That is auditable - it is impossible for a malicious site to do so without risking being caught.

Hardly. Minification and obfuscation are trivial, and you can ensure the output is always different in order to defeat auditing. Not great for caching, obviously, but 'auditability' is not achievable if the server is determined to fool you.

> - The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.

Passwords are simply not where you want to leverage your security. If you can find a documented example of a real threat that this approach would have mitigated, then I'll take it seriously.


So the malicious site can run risk assessment first and then if it thinks nobody's looking, send different code for hashing to this particular user.


This still feels vulnerable to XSS. Better would be to have browsers provide an API to do this so that $site is trusted.

The downside is not a tiny bit of computation time. It's also increased latency for the customer.


This is completely wrong. HTTPS is what secures this, not client side password hashing. If you don't use HTTPS, you can just get MITM'd to disable any kind of client side hashing.


You are wrong. Client-side hashing CAN be a silly thing, but it can also prevent a (compromised) server from seeing your password which you probably use on other websites (which is what most people do unfortunately).


>but it can also prevent a (compromised) server from seeing your password

If the server is compromised, then there is no protection of your cleartext password at all. This is because the entity that compromised the server can replace the original JS with anything, including new JS that sends your cleartext password off to their own host as you type each character.

The only activity on your part that can save you against compromised servers is having a unique password per server (i.e., not reusing any passwords).


Not true in modern architectures; that situation only applies to more traditional file & API server combos. If you statically serve your site with a service like S3 and have a backend running on Lambda or EC2, the attacker cannot modify the static assets, and the client-side hashing will prevent them from seeing the plaintext password.


Again, this is wrong depending on how the client is implemented, if updates are signed, if we are talking about a protocol, etc.


and if said "compromised" server simply decides to not supply the js that hashes the password?


Thanks for saying it. Client-side scripting can't protect against a compromised server when the client scripts are provided by that same server.


The answer is that it depends. We could be talking about protected js with SRI, signed updates with an electron client, a browser plugin or native hashing, a protocol similar to SSH that hashes the client pw, etc.


This is only true when client-side hashing is under control of the client. In a web browser, it is not. The browser will happily run whatever JS the server sends it. So if the server is compromised, it can send compromised JS, and there goes your client-side hashing protections.

An example of where it might work is in an app, where you're getting the client code from a separate channel like an app store.


It can protect you against non-malicious issues on the server side. If I recall correctly, Twitter recently discovered that they were logging passwords in plaintext by accident. With hashed passwords you reduce exposure of actual passwords in this type of situation.


or a separate channel like another server - which is the standard in every large web application I've ever seen.


This is why server side HSMs (hardware security modules) are a thing.


Is this true? Over time I have seen user passwords end up in a variety of strange internal places accidentally, like log files or crash dumps.


See: https://blog.cryptographyengineering.com/2018/10/19/lets-tal...

About client side benefits. I'm not advocating for JS in the browser but there are benefits to doing some work client side.


Who is sending passwords in cleartext on the wire?


I think totony meant sending passwords without pre-hashing, but yeah, it doesn't make sense to send any confidential information in clear text; it should be sent over end-to-end encrypted TLS channels.

Furthermore, pre-hashing doesn't necessarily make transmitting confidential information safer, as one could argue that your client-side JavaScript can be reverse-engineered, giving the attacker more information about how you hash your data.


Really, your back end should just treat a client-side hash like any other password.

Ideally, if TLS were being MITMed somehow, such as via a dodgy root cert, it would shield the user's plaintext password so it could not be used to log in to other services. The problem is that as soon as there is a TLS issue, an attacker can modify the JS to just send the password in the clear. It really would require code that can't be modified by the attacker, which means there would have to be some sort of browser support. Otherwise it does nothing against the attack it would protect against.

The main benefit is offloading some of the computational workload onto the client's machine. This could allow you to increase the work required to brute-force the password hashes, assuming your database leaks (i.e. increase iterations or memory requirements).

Your last argument is security through obscurity: if exposing how you hash makes it easier to brute-force the passwords, your password hashing sucks.
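
On the offloading point, a minimal sketch (PBKDF2 and the numbers below are arbitrary choices for illustration): the client pays the expensive key-derivation cost, and the server stores only a cheap keyed hash of the result, so a leaked database still forces the attacker through the expensive step for every guess:

    import hashlib
    import hmac
    import os

    CLIENT_ITERATIONS = 600_000   # expensive step, runs on the client's machine

    def client_prehash(password: str, email: str) -> bytes:
        # Salt with something the client already knows (the email), so the
        # server never has to ship a per-user secret before login.
        return hashlib.pbkdf2_hmac("sha256", password.encode(),
                                   email.lower().encode(), CLIENT_ITERATIONS)

    def server_store(prehash: bytes) -> tuple:
        # Treat the prehash like a password: cheap salted hash on the server.
        salt = os.urandom(16)
        stored = hmac.new(salt, prehash, hashlib.sha256).digest()
        return salt, stored

    def server_verify(prehash: bytes, salt: bytes, stored: bytes) -> bool:
        candidate = hmac.new(salt, prehash, hashlib.sha256).digest()
        return hmac.compare_digest(candidate, stored)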


Yes, I meant sending the password in cleartext inside the transport protocol.

Pre-hashing doesn't prevent an attacker from stealing your account if they can read the communication, but it prevents them from having your password and using it everywhere else you might reuse the password or a permutation of it.


Almost any HTTP site with a login form is sending your password in cleartext. Thankfully, initiatives like Let's Encrypt have made plain-HTTP sites much less common than they used to be.

Hashing the password before sending it doesn't really help you much - the naïve approach is vulnerable to "pass-the-hash" (where you basically send the hash instead of the password as the authentication token). The secure approach involves either some kind of challenge-response or a nonce salt, but these aren't as easy to implement correctly.
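
As a rough illustration of the challenge-response/nonce idea (digest-auth style, a sketch rather than a recommendation): the server stores a verifier instead of the password and sends a fresh nonce per attempt, so responses can't be replayed off the wire, though anyone who steals the stored verifier can still answer challenges:

    import hashlib
    import hmac
    import secrets

    def make_verifier(username: str, password: str) -> bytes:
        # Stored on the server instead of the password itself.
        return hashlib.sha256(("%s:%s" % (username, password)).encode()).digest()

    def server_challenge() -> str:
        return secrets.token_hex(16)          # fresh nonce for every login attempt

    def client_response(username: str, password: str, nonce: str) -> str:
        verifier = make_verifier(username, password)
        return hmac.new(verifier, nonce.encode(), hashlib.sha256).hexdigest()

    def server_verify(verifier: bytes, nonce: str, response: str) -> bool:
        expected = hmac.new(verifier, nonce.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)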


Indeed. And who is hashing passwords on the client? That would require either not using a salted hash, or sharing the server's salt with the client, in order to obtain identical hash values for comparison. In either case that system's entire password inventory would be a lot more vulnerable.

TL;DR: don't do that; send passwords over SSL and use a good password-hashing algorithm on the server, like bcrypt.


Yep. Proper password hashing requires a per-credential salt, a pepper (shared across all credentials), and a strong algorithm (IV, iterations, etc.). Revealing all that information is a leak, and it arguably makes client-side hashing less secure (by giving away a lot of parameters for attackers to attack).
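
For reference, a minimal server-side sketch of those ingredients (salt plus pepper plus a slow KDF), assuming PBKDF2 and an environment-variable pepper purely for illustration:

    import hashlib
    import hmac
    import os

    PEPPER = os.environ.get("PW_PEPPER", "dev-only-pepper").encode()  # kept out of the DB
    ITERATIONS = 600_000

    def hash_password(password: str) -> tuple:
        salt = os.urandom(16)                      # per-credential salt, stored in the DB
        peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
        digest = hashlib.pbkdf2_hmac("sha256", peppered, salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
        candidate = hashlib.pbkdf2_hmac("sha256", peppered, salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)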


NIST may say that you should use "peppers" for passwords, but nobody else does.

None of bcrypt, scrypt, or Argon2 use them, and they are not materially worse for it.


Yes, adding a pepper is a recommendation, not a mandatory step. But a lot of sites do, e.g. PagerDuty [1], paired with PBKDF2, which many apps require to meet FIPS certification or enterprise support requirements on many platforms. [2]

[1]: https://sudo.pagerduty.com/for_engineers/

[2]: https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet


Salts are not meant to be secret, nor are the hashing functions. You gain little by hiding them


If you're in the position to, or are developing an app, use Argon2!


Judging by his comment, totony is.

Your password _is_ whatever you send over the wire. Doing a hash in JavaScript before sending it won't obscure the user's password from anyone who can see their traffic; it will obscure the user's password from the user.


Nope, the password is what people type in. They may type the same thing at many websites. We should not care what exactly that is.

Why would you want to see the actual user password if you can just not see it?

If you see a password you can leak it by screwing up in any number of ways. If you never see the password, you just can't leak it.

E.g. Twitter recently discovered that they were storing passwords in plaintext in logs, and GitHub had a similar issue.

Take a look here: https://arstechnica.com/information-technology/2018/05/twitt....

Of course, a hash that you receive from the client should be treated as a normal password, including all good practices.


No, the password is whatever you send over the wire. If a website processes your attempt to type "password" into "5f4dcc3b5aa765d61d8327deb882cf99" before sending that to the server, then your password for that website is 5f4dcc3b5aa765d61d8327deb882cf99. That's what the server sees and how it recognizes you. The only effect of this is to make it less likely that the user knows his own password.


If a user's password is "password", they may be reusing it across 50 other websites. If you leak the information that "password" is linked to "email@gmail.com", I can hack the 50 other websites. If you never knew that the user's password is "password", you cannot leak it and I cannot use it to log in to 50 other websites. Leaking "5f4dcc3b5aa765d61d8327deb882cf99" is useless to hackers, because they can't go and use it to log in to another website.

So, there are properties that differentiate "password" and "5f4dcc3b5aa765d61d8327deb882cf99", even if for the server it's all the same.


The distinction you're trying to draw vanishes as soon as this becomes a standard practice. Passwords are already stored hashed and salted. They get compromised anyway, because the data is valuable. Under the circumstances you describe, cracking 5f4dcc3b5aa765d61d8327deb882cf99 (which takes less than a second) is just as valuable as cracking a password database entry is now, because the underlying issue -- reuse of credentials -- hasn't gone away. (In fact, you're encouraging it, so it's probably somewhat worse.) As long as people are reusing credentials across multiple websites, those credentials will have value greater than that associated with their use on any particular site, and other people will put in the effort to crack them. Even when you're generating and submitting a cryptographically secure salted hash, you haven't improved on the situation now, where databases store a secure salted hash of the password.


Lots. But even the ones that don't still send the password itself to the server, which is still bad.


How is sending the password to the server over HTTPS bad? What would you do otherwise? Hash it on the client? So are you not using salted hashes for your password store? That's far worse. Or you're hashing twice, the first with no salt client-side, then again with salt on the server side, which is fine, but the client-generated hash must be unsalted so is basically just the password itself: steal the client-generated hash instead of the original password, just as good with only minor loss in value (might not be able to reuse it on other sites for the victim; but actually maybe still could if you can build a reverse index of common passwords hashed using whatever algo is in use.)

And if you don't trust HTTPS to protect sensitive information, why would you send the auth cookies over it that have virtually as much power the password that was given in exchange for them in the first place?


Why would you want to see the actual user password if you can just not see it?

If you see a password you can leak it by screwing up in any number of ways. If you never see the password, you just can't leak it.

E.g. Twitter recently discovered that they were storing passwords in plaintext in logs, and GitHub had a similar issue.

Take a look here: https://arstechnica.com/information-technology/2018/05/twitt...

Of course, a hash that you receive from the client should be treated as a normal password, including all good practices.


> Hash it on the client? So are you not using salted hashes for your password store?

There is no reason you can't also salt on the client. Salts do not need to be secret. The substantial constraint you outlined in your comment isn't a problem.


Literally almost everyone. (Wrapped in a TLS connection of course.)


> (Wrapped in a TLS connection of course.)

So, not cleartext over the wire then.


If the client hashes the password, then the hash itself is the password, meaning stealing the hashed passwords is the same as stealing the plaintext passwords they're based on, since you can send them directly.

Blizzard Entertainment does half-client, half-server hashing, which is rather clever; it's one of the few examples where client hashing makes sense.


Nope, the password is what people type in. They may type the same thing at many websites. We should not care what exactly that is.

Why would you want to see the actual user password if you can just not see it?

If you see a password you can leak it by screwing up in any number of ways. If you never see the password, you just can't leak it.

E.g. Twitter recently discovered that they were storing passwords in plaintext in logs, and GitHub had a similar issue.

Take a look here: https://arstechnica.com/information-technology/2018/05/twitt....

Of course, a hash that you receive from the client should be treated as a normal password, including all good practices.


I'm curious, how is half-hashing the password different from really hashing it?

The best protocol I know of is to derive a signing keypair from your (salted, stretched) password, and store the public key on the server instead of a password hash. Then during login, the server sends a challenge to the client, and the client signs it. The server never sees any secret material at all. Keybase uses a version of this protocol.

Unfortunately all the magical client side crypto in the world doesn't save you if the attacker can compromise your server and then send clients bad JS :p
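
For the curious, here is a minimal sketch of that kind of protocol, assuming the third-party Python cryptography package, PBKDF2 for the stretching, and made-up parameters; real implementations (Keybase included) differ in the details:

    import hashlib
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def derive_signing_key(password: str, salt: bytes) -> Ed25519PrivateKey:
        # Stretch the (salted) password into a 32-byte seed for an Ed25519 key.
        seed = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=32)
        return Ed25519PrivateKey.from_private_bytes(seed)

    # Signup: the client sends only the public key; the server stores it.
    salt = os.urandom(16)                      # the salt can be stored in the clear
    public_key = derive_signing_key("correct horse battery staple", salt).public_key()

    # Login: the server sends a random challenge, the client signs it, the server verifies.
    challenge = os.urandom(32)
    signature = derive_signing_key("correct horse battery staple", salt).sign(challenge)
    public_key.verify(signature, challenge)    # raises InvalidSignature on mismatch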


Have you heard of this new technique called HTTPS?


It is so difficult to evaluate anything that Google announces without passing it through a lens shaded by their ad business model. It doesn't matter with what intention they implement a change, or whether those intentions are pure.

Sure, this will make login more secure by preventing credential stuffing. But doesn't this also help them get more data from users who had opted out by disabling JS? And was this the best and only solution to tackle this problem?

I believe that internally, too, Google must struggle with this perception.


Chrome team members have said that preventing modern web browsers from leaking enough entropy to uniquely identify users across sessions (fingerprinting) is impractical with all of the features that the modern web provides.

This sentiment has probably led some at Google to think that it is justifiable to impose pervasive tracking and surveillance technologies as a condition for using Google services.

Look at the new ReCAPTCHA v3 to see how fingerprinting is trending at Google.


Do you have a better solution for differentiating yourself as an actual user from a robot spammer?


Being allowed to pay for services with money, rather than being required to pay for services with your personal data. I browse the web via a proxy when I'm on public wifi, and Google is nigh unusable with how many captchas it forces you to solve to do a single Google search. Fortunately Bing and DDG still work, for now.


And for the "this is a real person" service, they really don't have to charge much. I should be able to go to the store and pay a dollar for a PIN that gets anonymized on use and lasts a long time if not indefinitely when not abused.


> Being allowed to pay for services with money, rather than being required to pay for services with your personal data

One possible big disadvantage for a site owner in letting people pay with money instead of data is dealing with taxes.

I'm not actually sure how most jurisdictions tend to classify it when someone comes to your site and you charge them to access your content. My guess would be it would be taxed the same way they tax digital goods such as downloadable software that does not include any physical components.

For such goods most jurisdictions that tax them seem to base the tax on the buyer's location rather than the seller's location, so the seller has to deal with charging the appropriate tax and filing returns for potentially a large number of jurisdictions.

If the seller is providing the content for "free" and making their money selling visitor data to advertisers then their income will probably only be subject to taxation in their own jurisdiction. It will simply be ordinary business income.


My experience with taxation outside of the US, currently Australia, is that they are ridiculously reasonable and probably more on my side (that is, the business owner's side) than I would like them to be.

It would be a net benefit for everyone, including the US, if its broken taxation system became a national emergency and finally got fixed.


I would absolutely subscribe to an email service with a 5/10/15 dollar one-time payment.


Email has recurring costs to run. They need to process every email you receive and send and therefore any email service that actually wants to continue running will need to charge a subscription fee.


A paid account is just as vulnerable to password theft as a free one.


This is only truly necessary as a mechanism to prevent automated signups.

It is less necessary, but can be useful, to prevent repeated login attempts (rate limiting works there).

For accessing a website it's completely inexcusable.


Background: I spent years developing a product that currently defends F500 websites against automated attacks. If you live in the US you've more than likely used my software this week without knowing it.

Rate limiting is completely ineffective in preventing credential stuffing attacks from determined adversaries. The challenge is not brute-forcing, but credential leaks and password reuse. Attackers have access to vast seas of IP addresses and in my past life doing the defending, we would see an IP address involved in automation twice, and then it would go away forever.


I've covered this in a reply to another one of your comments, I think, so won't bother here.


> This is only truly necessary as a mechanism to prevent automated signups.

Only if there isn't some shared knowledge between the website and prospect.

For example some niche websites that I use have locally-based challenges such as 'how many engines can be fitted to a Boeing 747-400'* or a short mathematical algebra to solve.

Both approaches work without JavaScript and without external support, just with a two-column table and a trusted group of challenge creators (a sketch follows below).

* it's more than four
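
For what it's worth, the server-side logic for such a challenge table can be tiny; the questions, accepted answers, and matching rule below are invented for illustration:

    # A tiny "two-column table" of curated challenges; matching is deliberately naive.
    CHALLENGES = {
        "How many engines can be fitted to a Boeing 747-400?": {"5", "five"},
        "What does the I in IMAP stand for?": {"internet"},
    }

    def check_answer(question: str, answer: str) -> bool:
        accepted = CHALLENGES.get(question, set())
        return answer.strip().lower() in accepted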


Rate limiting login attempts means I can prevent you from logging in indefinitely by pushing your account into the rate limited status.


Only as long as you keep up the attack.


But an attack that keeps rate limits saturated doesn’t require much bandwidth (you know, because rate limits) and is therefore easy to keep up. Most services would consider it unacceptable if a script kiddie with a botnet and a generic list of usernames were able to prevent even 1% of their users from logging in.


So your solution is "sorry, come back later when the attacker hopefully stops"?

I can keep you out of your account indefinitely with a curl loop and a rotation of proxies?


There's a solution here that's being used in email. The user has to provide a proof of work.
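
A hashcash-style sketch of that idea (difficulty and encoding invented for illustration): the server hands out a challenge string, the client must find a nonce whose SHA-256 has enough leading zero bits, and the server verifies the work with a single hash:

    import hashlib
    from itertools import count

    DIFFICULTY_BITS = 20   # tune so solving costs the client a noticeable amount of CPU

    def solve(challenge: str) -> int:
        target = 1 << (256 - DIFFICULTY_BITS)
        for nonce in count():
            digest = hashlib.sha256(("%s:%d" % (challenge, nonce)).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce               # roughly 2**20 attempts on average

    def verify(challenge: str, nonce: int) -> bool:
        digest = hashlib.sha256(("%s:%d" % (challenge, nonce)).encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))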


Where is it being used?

Because the reason why https://en.wikipedia.org/wiki/Hashcash wasn't useful for anti-spam after all is because attackers have access to the cheapest compute in the world: botnets and devices that aren't their own.


Honeypots take care of most basic bots. If you need a captcha, you can use a math question, like on http://random.irb.hr/signup.php. You can also add extra security by checking the time elapsed between loading the page and submitting the form; if it's less than 5 seconds, it's most likely a bot.
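
A sketch of those two checks on the server side; the honeypot field name, the threshold, and where the issued-at timestamp comes from (e.g. a signed hidden field) are all assumptions:

    import time

    MIN_SECONDS = 5   # forms submitted faster than this are treated as bots

    def looks_like_a_bot(form: dict, form_issued_at: float) -> bool:
        # Honeypot: a hidden "website" field that humans never see or fill in.
        if form.get("website", "").strip():
            return True
        # Timing: the "less than 5 seconds" heuristic described above.
        if time.time() - form_issued_at < MIN_SECONDS:
            return True
        return False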


That's why agent randomizers were invented


What a bunch of, excuse the language, paternalist fear-mongering bullshit. Of course Google wants you to enable JS, because it allows them to monitor and track everything about you more easily. Twisting it into "this will make you safer" is sad and undeniably repugnant.

I've noticed a lot of other sites practically begging you to "enable JavaScript for a better experience", when all their content is static text and images. I fell for that once, a long time ago --- enabled JS briefly to see what the big deal was --- and was promptly bombarded with popups, slide-overs, and even more ads. No thank you, I'll keep it off.

For many years I used IE6 with JS off (and a whitelist for a very, very small number of selected and highly-trusted sites. IE has a "security zone" feature which to my knowledge no other browser comes with by default.) Not a single malware infection, and that's despite often visiting the... shadier parts of the Internet.

Browser exploits are almost all JS-based, and even the few that aren't, are in practice deployed using obfuscation involving JS, to make analysis and detection harder. Turning off JS effectively kills those risks as well as other annoyances (blocking right-click, text selection, injecting crap into copied content, etc.)

IMHO the advantages of not having JS on by default are underrated. I'd consider ~99% of the sites I come across when searching for content to not require it at all, which certainly is a stark contrast from the "you'll break the Internet!" screams of the JS-advocates and "web designers". Sites that "break" from not having JS, and which aren't specifically "appsites" but mostly content-sites, are not worth visiting anyway.

Perhaps it's time to raise a counter-movement, and add (via <script> tags, of course) "you have JavaScript turned on in your browser, this is a security risk! Click <here> (link to appropriate page with all the risks and how to turn it off) to learn more." Or "You have JavaScript enabled, please disable it for a better experience."

If a certain vocal minority managed to demonise Flash (another powerful technology that had major uses) to almost completely kill it, maybe the same can happen for JavaScript?


> Browser exploits are almost all JS-based, and even the few that aren't, are in practice deployed using obfuscation involving JS, to make analysis and detection harder.

Go take a look through Pwn2Own. Most browser exploits do not involve JavaScript. JavaScript can be a delivery mechanism for a certain class of payloads, but it's not the substantial weakness in browser vulnerabilities (as opposed to web application vulnerabilities).

Given that this isn't familiar to you, I'd recommend you reevaluate how confident you are in the JavaScript-focused defenses you've outlined in the rest of your comment.


The security bugs found during Pwn2Own are usually kept under embargo, and only published later by the browser developers. Here's an article about a Chrome bug found in 2017's Pwn2Own:

https://www.computerworld.com/article/3186686/web-browsers/g...

and it does involve JavaScript. Could you provide some recent examples of the sort of browser exploits you mean, that don't require JavaScript? (I assume you're not including exploits in extensions/plugins like Flash or PDF viewers.)



Sure: https://www.cvedetails.com/vulnerability-list/vendor_id-1224.... Also look at Google Project Zero writeups, and similar CVE details for Firefox and Safari.

Some are related to JavaScript (notably, it's hard to exploit V8 if the browser isn't processing any JavaScript!). But the vast majority are memory corruption and sandbox escape issues.

Disabling JavaScript insulates you from a nontrivial - but nonetheless minority - subset of browser vulnerabilities.


But the vast majority are memory corruption and sandbox escape issues.

...which require JS to exploit. (What is a "sandbox escape" if there isn't code to... escape it?)

I went through the 50 vulnerabilities on that page and looked at the nature of them and inspected any PoC code if any. This is the results:

PDF,JS,JS,WebGL(JS),JS,CSS,JS,JS,extension,HTML(but PoC needs JS),JS,PDF,PDF,MIDI(JS),AppCache(JS),PDF,JS,UI,WebRTC(JS),JS,JS(speech recognition!?),JS,JS,JS,JS,JS,JS,JPEG(rendering uninitialised memory --- not exploitable without JS to read that data),JS(WebWorker),HTTP/SSL(!),SVG+JS,UI,JS,XML(!),SVG+JS,JS(audio),Fonts(actually Windows font renderer bug),UI,??(no details available),IndexedDB(JS),WebGL(JS),UI,JS(WebSockets),NaCl(extension),extension,PNG,WebGL(JS),?,JS,WebGL(JS)

That's 32/50 confirmed to require JS to exploit, and only 3/50 stood out as being "visit a page with all plugins/JS/extensions disabled, and still get pwned", of which 1 is actually a Windows bug.

Disabling JavaScript insulates you from a nontrivial - but nonetheless minority - subset of browser vulnerabilities.

Looks more like a majority to me.


Of the first 28 vulnerabilities listed on that page (representing the past 5 years of CVEs for Chrome), I count only 7 that don't mention V8, JavaScript, PDFs, plugins, or "unspecified vectors".

If someone is choosing to disable JavaScript in their browser, I think it is reasonable to expect they are also disabling plugins for other formats like Flash and PDF.

We might disagree about whether disabling plugins counts as part of disabling JavaScript, or about how to count the "unspecified vectors" CVEs, but the clearest minority is the roughly 25% of vulnerabilities from which a user who disables JavaScript (and plugins) would not be insulated.


JavaScript can be a delivery mechanism for a certain class of payloads

That's exactly what I'm saying --- in the real world, exploits tend to be wrapped in JS even if they don't technically need it.

but it's not the substantial weakness in browser vulnerabilities

Then what is? My real-world experience also correlates.


> Then what is? My real-world experience also correlates.

HTML (non-JS) parsing, memory corruption, image processing, filetype validation and process isolation/sandboxing.


Google's tracking everything you do on _their properties_ to begin with because they're a marketing company. Use DuckDuckGo if you don't like that. The pearl clutching in this thread is unreal.


[flagged]


"Pretty soon your every action will be tracked throughout the Internet"

You're saying soon the computers that make up the internet will gain the capabilities of IBM machinery of the 1930s?


This seems reasonable to me. JavaScript is being used everywhere and most people are okay with it. As a business decision I don't see why Google would support such a small edge case. That being said, if they feel like explaining why, they should maybe try a little harder. I assume most people with JavaScript turned off might appreciate more details so they can decide how to respond to the requirement. Then again, their current approach actually seems reasonable to me.


99.9% of users are okay with JavaScript.


99.8% of users don't know what javascript is.


Thus only 0.1% of users know what javascript is and are okay with it. That is an interesting random stat; seems pretty realistic. I at least liked it.


That's the wrong take on the stats. It means among the ones who know, half of them don't want it. This is more telling than looking at the 0.1%.


I think you’re confusing “numbers some guy made up” with “stats.”


No, I'm just following on the assumptions of the previous posts. Of course the numbers are fictional.


Then if we take this 0.2% as representative (and 6 million out of three billion users probably is), it would be fair to say that this 50% split would scale if more people learned about javascript.

Google will be in hot water if humanity ever decides to take on javascript... assuming the source for those stats isn't someone's ass.


>It means among the ones who know, half of them don't want it.

Which, of course, isn't true to begin with.


it would mean that, if it's a real stat and not just something that was invented on the spot.


For about a month or so I tried browsing with JavaScript turned off, but gave up after having to modify settings for just about every single site I visited to get pages working, often with them silently failing in the background and leaving me wondering what was going on. Sometimes I'd get halfway through a payment transaction before realising that the lack of JavaScript was preventing it from going through, and then changing settings and reloading the page would break my session. Long story short, I gave up on this and have succumbed to running JavaScript for all sites again.


I browse with javascript blocked by default and also often need to enable a script for proper functionality. However when I enable that one script, I will almost always see numerous other blocked scripts that can remain suppressed without a problem for me. Most of these are related to tracking/advertising. For me blocking the rest of the scripts is well worth having to play around a bit when I first visit a new website.


I'm on ~5 years of temp-whitelisting only for JS domains. I have acquired a sixth sense for which CDNs and assorted domains are really required for what.

Sure, it does take some back-and-forth testing, and some sites (wix.com-based ones, for example) are terrible no matter what, but in general it's a better browsing experience.


My favourite are pages that embed videos using a huge collection of different third party embedding tools. It usually takes 2 or 3 rounds of whitelisting to get the video to even show up, and another to get it to play.


I went through the same pain but took the opposite approach: I've just started using the internet less. I allow JS for Amazon and Gmail (which I am moving off of), but for 99% of the rest of my personal use I only read text-only sites. I guess I have internet traffic for Spotify too, but that's an app, so I've already let them onto my system.


I use two browser profiles: one with uMatrix, allowing only first-party JS, and another without addons for special cases like banking. I do not browse random pages with the second profile, just the ones I've decided to trust.


You can add exceptions for websites you visit, you know...


try umatrix


uMatrix is built around reactive settings, and once you're reacting it's usually too late to fix a failed payment attempt.

I've faced the same problem a lot, and what I really need is a way in ublock/umatrix to turn it off for the current tab, even as it jumps across several domains. But this doesn't seem to be an option. Am I missing something?


Same exact journey for me.


You can disable Javascript ... if your time has no value.


Amusingly, the article is perfectly readable with javascript off, but with only first-party js allowed, it's blank.


Ohh that is interesting. Anyone know why that is?


Blogger (which is what hosts this article) includes a copy of the whole article within <noscript> tags. If you completely disable Javascript, the browser shows the content of these tags, so the article is visible. If you have Javascript enabled, even if all domains are blocked, the content of these tags is hidden, so the article shows as a blank page. You have to unblock the third-party domains where Blogger hosts its scripts to make the article appear again.
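For anyone curious, the pattern is roughly the following (a hypothetical sketch, not Blogger's actual markup); the key point is that browsers only render <noscript> content when scripting is disabled outright, so merely blocking the script's domain isn't enough:

    <!-- Sketch of the pattern described above; names are made up -->
    <div id="post-body"></div>
    <script src="https://third-party-cdn.example/blog-widgets.js"></script>
    <!-- Rendered only if scripting is disabled in the browser entirely;
         with JS enabled but the domain blocked, this stays hidden -->
    <noscript>
      <p>Full article text repeated here...</p>
    </noscript>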


"But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users choose to keep it off"

Yeeaaah Google. Thaaaat's why we don't load JavaScript from you ;)

I get it they can't make this opt-in and expect many to use it, but it'd be nice to see an opt-out be made available.


When Javascript becomes the new pillar of security something has gone terribly wrong.

Background: I love building Single Page Applications, Progressive Web Apps and have JS always enabled. So no hate for JS in general, but when your security depends on the correct evaluation on the client side, you are starting a dangerous cat and mouse game.


They've been playing a dangerous cat and mouse game the whole time — I'm sure the old tools are not being discarded; just added to.


0.1% of Google's users is still quite a lot of users, is it not? Somewhere over a billion, perhaps, which would mean 1 million+ with JS off.


I'm gonna wager that 0.1% of users who don't or can't run Javascript aren't very valuable to Google.


0.1% of roughly 4 billion would be about 4 million. Somewhat more in pageviews.

Of course, this fails to account for self-selection, a behaviour Youtube has encountered before, in which increasing Youtube performance slowed average load times ... because users who'd previously found the site intolerably slow now found it slow, but tolerable.

(I've looked for the story, can't find it.)

Much as those who prefer to avoid use of Google won't appear in site use metrics. And enforced JS could well prove a deterrent.


You stopped reading the single sentence in the comment you responded to about half way through.


I tried to live with javascript disabled by default but gave up after two months because all I did was whitelist every page I opened. Gave up on my own side projects too. Building something that works both with and without javascript is just too much work for me, and it becomes ugly quickly.

I still think modern websites over use javascript too often and should use markup over code whenever possible.

And of course no one wants to go back to iframes to load dynamic content. Or should we?


try again with noscript allowing main page by default but not 3rd party.


I think that most of the annoying (but non breaking) javascript is already covered by ublock.


Same thing with littlesnitch


I'm genuinely curious who actually browses the web in 2018 with JS disabled, though. Wouldn't 99.9999% of the web basically break? Like, if you do, do you only stick to a few basic sites, or?


I browse with JavaScript disabled (NoScript) with a few sites that I care about whitelisted. Most web sites work fine although their layout is sometimes not what the web designer intended. Some sites don't work at all... for those sites I generally hit the "back" button. When I enable JS to view "JS required" sites, I usually hit the "back" button before it finishes loading anyway so why bother.

NoScript makes a big difference with my laptop and a HUGE difference with my phone.


I browse same as you: Firefox & NoScript, very few sites on by default. My experience matches yours with one other thing you didn't mention:

When I get a blank page, I do:

   View
      Page Style
         No Style
usually fixes things quite well. E.g. seattletimes.com is blank by default but turning off Page Style gives me both text and images. Just not formatted very well, but I'm willing to accept that tradeoff.


It's a pretty extreme practice, and only something a tiny fraction of very technical users will do. I think in general the people who turn off JS do it in a way that they allow it on sites they really have to use that really don't work without it.

Asking this question on HN will have an extreme selection bias, sort of like asking "who really thinks aliens are visiting Earth and abducting people?" in the Roswell UFO museum staff meeting.


I have JavaScript disabled by default (incidentally I also disable cookies). I do this for page load speeds, security from malicious JavaScript, disabling of most advertisements, and privacy from ad tracking networks. Best practice is to serve static content and then progressively enhance that content with browser features like JavaScript. Unfortunately many sites disregard this. I routinely encounter things like an entirely blank page, missing images, broken forms (of course if I am logging on to a site I will re-enable cookies). If I think the site is trustworthy enough and I really am interested in the content I will whitelist that site. Otherwise I just hit back.


I used to think the same thing about people who browse with auto-loading plugins. Then browsers caught on and started click-to-play'ing them.

Javascript is the new plugins (so much host OS functionality exposed now) and it requires a new click-to-play. I'd like to think they'll eventually integrate something like NoScript into the browser as a standard. This free for all code running is too much now that the browser is an OS.

But it's difficult to get browser companies to understand something when their income depends on not understanding it.


You're misunderstanding: you browse with JS disabled by default. Random sites shouldn't be running programs on your computer. If you trust the site, you whitelist it.


You're acting like "running programs on your computer" is a bad thing. It's not.


Well I don't want to run EVERYone's program on my computer, I want to run MY programs on my computer.

Most websites are made of text, I usually want the text, not whatever program they're running.


Running untrusted code on your machine _is_ a bad thing though.


Putting security to one side, disabling JS is an easy way to performance optimise their pages on their behalf.


[flagged]


> and they work

... They break about as often as they work. [0][1]

There are a lot of vulnerabilities that appear in web browser protections, and almost all of them get exploited via JavaScript. Running untrusted software is not safe.

[0] https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=chromium

[1] https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=firefox


This is factually incorrect. It is entirely safe to run JavaScript in your browser. Your definition of "a lot" doesn't fit this conversation, because it doesn't represent constant and regular vulnerabilities.

Browsers are very safe. Not perfectly safe, but very safe. Thinking otherwise is paranoia, just like not running JavaScript by default in your browser is paranoid.


No, they don't always work. We have bullet proof vests but it's still not safe to put one on and have someone start shooting you.


[flagged]


>Do you wear a bulletproof vest every time you go outside? No? Interesting.

You realize the bulletproof vest is the sandbox, right? You've just made an argument against enabling all javascript.


No, it's not. The lack of availability of guns is the sandbox, the cultural more of not killing people is the access controls.

"Bulletproof vest" in this analogy is "extraordinary activity designed to keep you safe".

You don't "wear a bulletproof vest" (a stand-in for "take extraordinary measures") to prevent yourself from "getting shot" (a stand-in for "thing that happens very rarely").

In other words, your assessment of the risk of JavaScript running in a browser is higher than it actually is.


I'm not sure you understand how many entities are trying to get into your computer to track you/steal data/mine bitcoin/etc. These are active attacks against your computer every day if you are a regular user clicking viral shit on Facebook, random ads for things, etc.

Look at how many 3rd party JS libraries get loaded from remote sites for metrics, frameworks, tracking, ads, etc for something like a newspaper site. There are probably 15 servers involved, most of which are run by companies with limited security expertise (if any), so frequently they end up compromised to inject garbage into visitors' browsers.

There is absolutely no lack of availability of guns or people attempting to use them on you in this analogy. The bullet proof vests are good, but that doesn't mean there isn't someone attempting to shoot you in the chest every time you go out.

Look at spectre/meltdown. Arbitrary code execution is not safe. Not in a browser sandbox, not in a kernel namespace, not in a hypervisor. You're protected from common thugs most of the time, but your bullet proof vest will fail if someone with a powerful gun takes aim.

There is a reason the CIA doesn't use the same AWS servers as the public and it's the same reason you can't view Facebook from inside a secure military network. Sandboxes are just a protection mechanism from well-understood attacks, they don't provide anything near the level of real isolation that Internet companies would love you to believe.


And I'm not sure how secure I am against those entities. There are active and unsuccessful attacks against my computer every day, and none of those attacks rely exclusively on running JavaScript in a browser sandbox.


Even setting aside things like rowhammer and spectre, something like half of the browser vulnerabilities need Javascript to be exploited.


I'm not sure if it quite qualifies as browsing with JS disabled, but I run uMatrix with the default settings set to disable all scripts on a site. Then I've whitelisted some common domains I trust that greatly improve usability (things like ytimg, paypal, some common 3rd-party scripts).
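Roughly, that setup looks like the following in uMatrix's rule syntax (written from memory, so treat the exact lines as approximate; the last line shows a made-up per-site exception):

    * * * block
    * * css allow
    * * image allow
    * ytimg.com script allow
    * paypal.com script allow
    news.example.com cdn.example-static.net script allow

The first three lines block everything by default but keep CSS and images; the rest whitelist specific script sources either globally or per site.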

I'd say I browse the web normally. Plenty of sites load normally with scripts disabled (and may even be more pleasant to browse). For maybe 25 to 40% of the sites I hit, I'll happily enable 1st-party scripts and a couple of known CDNs, and that'll be enough to have the site working. For I'd say 5% or less of sites, I'll have to actually engage and think about whether the calls to a 3rd-party API are required, or whether the cloudflare/amazon domains are actually part of the site and there for a good purpose, or may be being used to serve up nefarious code.

One of the things to consider about the above numbers is that they only really apply to the first time I visit a site. With exceptions for when a site changes how it's implemented (or a 3rd-party script calls a new domain), which is nice to know anyway. (Or when the tool I'm using to block scripts changes (3-4 times over the last 15 years, plus another couple for not transferring whitelist data to new machines).)

I wouldn't really recommend my approach as something to get into, for 2 main reasons. Firstly because the web does work much nicer when you just turn scripts on and say 'to hell with the privacy and security concerns'. Secondly there is a fair bit of otherwise useless knowledge built up over the years of doing this that means this setup is significantly less onerous than it would be for someone who only just starts doing it and needs to look up every 3rd party domain.

Having said that, I would highly recommend installing some script management with more permissive default settings (rather than my more draconian ones), as it breaks very little, takes little effort to fix most of the things it does break, and will stop well-known tracking and malicious sites.


> Firstly because the web does work much nicer when you just turn scripts on and say 'to hell with the privacy and security concerns'.

OTOH, the web works faster with uMatrix. You can probably get an extra couple of years out of your computer if you're aggressive about disabling ads and superfluous JS.


Most websites work fine; you have to be truly incompetent to make a text-and-image-heavy website that doesn't work without JS. Web apps are a different story.

The ones that do break tend to be the ones invading your privacy.


I browse with 3rd-party frames and JS disabled. Not as extreme as disabling all JS, so it doesn't break most things. Occasionally I need to whitelist a CDN or related domain for that site to get things working.


I have uMatrix installed and set to block 3rd-party scripts. Almost all websites work fine, or even better, like this, but sometimes I have to enable a CDN to get the website to work.


I do.

Most sites I use work fine without JS. The few sites I need that require JS are accessed from isolated virtual machines dedicated to just that site. The sites I don't absolutely need that require JS don't really exist in my world. That was a choice they made.


> Wouldn't 99.9999% of the web basically break?

What leads you to a number like that? I can tell you that you're wrong, but I'm much more interested in how you got there.

Youtube needs javascript, and images or sometimes body text on badly-designed blogs need javascript. Most sites are fine. HN is fine, for the most obvious example.


I do sometimes. Mostly to read articles on sites with a poorly implemented paywall that simply doesn't work with JS disabled and shows the whole content. As a bonus often half of the ads are not served as well.


I'm not really sure how it works if you don't want to use JavaScript; do you then whitelist sites that you're ok with?

Wouldn't the solution for folks who don't want to run JavaScript just be to whitelist it for Google?

Maybe I'm missing something, but that seems pretty simple; it would let Google do their thing, and the folks who don't want to run JavaScript would run it just for Google and the places they want to.

If someone already doesn't trust Google, I gotta think they abandoned their products entirely already. Google's data collecting is so complete that JS or not, same objections.


> pretty simple, [] let Google do their thing

Unfortunately El Goog is now a de facto monopoly both for makers (organic traffic) and users (AMP, discovery, federated logins etc).

I'm not angry at Goog for winning, they did it by (mostly) being 100x more awesome than the competition and surfing the network effects into the endgame. But here's the elephant in the room: outside walled gardens, Goog runs da tubes, and they're not as fluffy as Mr Cutts would have you believe. Mandatory JS and AMP are the not-even thin end of the wedge.


It certainly is its own monopoly, but at least for personal use, folks who don't like JavaScript can avoid it if they really want.

Just generally, this decision about logins and javascript seems pretty... not a huge deal IMO for folks who aren't fans of JavaScript; whitelisting Google seems like a pretty simple choice.


Yes, practical.


As others have mentioned, nearly the entire internet breaks without javascript - it's a sad state of affairs.

One protection I like to use is to disable javascript that is loaded over plaintext http - this breaks nearly nothing, and is easy to do via Chrome settings: https://i.imgur.com/NRVg5Xf.png


I browse without javascript all the time. Sometimes I need to switch to another browser to buy something from some random site, but generally it's just a faster, better internet, with very dodgy sites self-selecting themselves out of my usage.

If there are one or two sites I really want to use in my main browser that need JS, I can whitelist that site - but only exactly that site.

I am not an expert, but it seems a better experience to me, and I don't really think it makes me less secure?


Chrome will always block an HTTP javascript call by an HTTPS page. So given the growing prevalence of HTTPS, I think this setting is probably doing less and less for you.


And therefore, this setting is safer and safer to enable:)


This is a sad state of affairs. However, notice that once you are logged in to gmail, you can still disable all javascript and go to https://mail.google.com/mail/h/ for a somewhat reasonable user experience.

It is best to run gmail in a separate, private browsing window, so that it does not interfere with normal web browsing (i.e., make sure that other websites such as google search cannot see that you are actually logged into gmail).


I don't get the same behaviour as what is shown in the screenshots of the google blog. If I go to Google, disable JS and click on 'sign in' I go to an 'account chooser' page. From there, no matter what I do I end up being redirected to that same page. Not great UX from Google if others experience the same as me.


The post states:

>"When your username and password are entered on Google’s sign-in page, we’ll run a risk assessment and only allow the sign-in if nothing looks suspicious. We’re always working to improve this analysis, and we’ll now require that JavaScript is enabled on the Google sign-in page, without which we can’t run this assessment."

Is the idea that an actual browser will be able to have a fingerprint whereas a bot would not? Is the check for javascript a way to short-circuit responding to a non-browser-based request? It wasn't clear to me.
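Roughly the former, as far as anyone outside Google can tell: with scripts running, the page can collect signals that automated or headless browsers tend to leak, and send them back with the login request. The real checks aren't public; this is just a hypothetical sketch of the kind of thing such a script could observe:

    // Hypothetical examples of automation signals a login page could collect.
    // Google's actual checks are not public; this is illustrative only.
    const signals = {
      webdriver: navigator.webdriver === true,             // set by Selenium/WebDriver
      headlessUA: /HeadlessChrome/.test(navigator.userAgent),
      noPlugins: navigator.plugins.length === 0,           // common in automation
      noLanguages: (navigator.languages || []).length === 0,
    };
    // A real system would also look at event timing, rendering quirks, etc.,
    // and return an opaque, signed token that gets submitted with the form.

With scripting disabled, none of these signals can be gathered at all.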


I don't want google to protect me in this way. Can I opt out?


Don't use Google?


That is unfortunately not a practical option for many people whose school or workplace forces use of GMail.


Good luck with the introduction of Captcha v3.


We can argue back and forth about the pros and cons of requiring JS. But one thing we can't argue about is the loss of security for political dissidents and other non-protected groups who access Gmail or other Google services through systems like Tor.

No, this isn't Google's problem, but it illuminates what kind of purpose Google believes its products should serve, and what kind of people it should serve.


I don’t remember the last google story that made me think “wow, what a cool idea.” That used to be the norm. Now it’s “ugh what happened to y’all??”


JavaScript is fast becoming, if it hasn't already, the next rich GUI framework... It was supposed to be just a touch-up on HTML; now you can't sign in anymore w/out JS enabled. What if I wanted to do a rich client in C# or Qt? Do I need a JavaScript engine to sign in?


Yes, plenty of apps allow you to sign in with Google -- they open up an embedded web browser for the sign-in dialog and oauth permission screen.


Google is not in my internet hot path anymore. So, good for them. DDG & ProtonMail work just fine.


How long until you have to sign into Google to search?

They already pop up messages trying to trick you into logging in to "adjust your settings" or "improve your privacy" or some such.


So I'm wondering, all these automated detections and CAPTCHAs... how does that tie in with automated decisions being made about you under GDPR? They're trying to protect us from bots, but those decisions are also made by bots. What if their bot makes a mistake? Does that count as an automated decision under GDPR, for which I should be able to ask a human for a second opinion?


I have javascript disabled mostly to save bandwidth against nasty ads. Most sites that require JS I usually ignore if possible (seriously, why do some content providers think it's ok to require JS?). For certain things, like soon the google login, I do just whitelist them. But by default, if I do not trust your site enough and/or it's not that important, in the end I'll just close my tab.


Thank you Google for making me investigate alternatives.

I am serious, I've been using gmail since 2004.

I realize some day my main account is going to be permanently inaccessible just because I will be in the "wrong" country or will be using the wrong browser in the wrong cafe.

There should be no need for Javascript with proper 2FA.

Your algorithms already make me solve ridiculous captchas "some" of the time.


This reads like an April fools joke.


Yah, but what if I want to be able to log into my account using a script? I know I'm in a tiny minority, but I have on several occasions had to use selenium to automate actions on a web app. For this reason, captchas are similarly annoying to me.


If you want to automate your actions, you should get an API key and use the official APIs.
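For Google services specifically, that usually means an OAuth client rather than a scripted login. A rough sketch using the googleapis Node client (the credentials are placeholders, and the exact calls should be checked against the current docs):

    // Sketch only: CLIENT_ID, CLIENT_SECRET, REDIRECT_URI and REFRESH_TOKEN are placeholders.
    const {google} = require('googleapis');

    async function listRecentMail() {
      const auth = new google.auth.OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URI);
      auth.setCredentials({refresh_token: REFRESH_TOKEN});
      const gmail = google.gmail({version: 'v1', auth});
      const res = await gmail.users.messages.list({userId: 'me', maxResults: 10});
      console.log(res.data.messages);
    }

This avoids browser automation and the captcha fight entirely, at the cost of only being able to do what the API exposes.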


I wish official APIs were available for everything I want to do.


Isn't that what Oauth is for?


no hax0r, no soup for you!


What many of the comments seem to be missing is that you don't need to enable javascript on all google sites, only the sign-in page. With that in mind, I don't think there needs to be as much concern as we're seeing here.


This kind of apology for a problematic change misses how allowing small changes eventually normalizes the change[1], allowing another similarly "small" step to be taken in the future. Today it's "just the sign-in page"; tomorrow it will expand to cover something else because "it's already something you're used to using on the sign-in page".

It's rare for major changes to arrive all at once. Small steps are taken, with each eventually becoming the new normal that allows another small step to be taken[2]. This isn't always intentional - each small step can seem rational at the time, in isolation.

[any replies complaining about "slippery slopes" will be ignored; the normalization of deviance does not mean change necessarily will follow, just that normalizing incremental changes allows people to support changes without noticing the larger picture]

[1] https://en.wikibooks.org/wiki/Professionalism/Diane_Vaughan_...

[2] https://en.wikipedia.org/wiki/Overton_window


Is there an alternative to Gmail? Just tried ProtonMail and Fastmail; both require JS.


Yeah, use mutt or thunderbird.

Conveniently those both work with either fastmail, gmail, or any other popular IMAP-compatible email provider.


ProtonMail requires JavaScript for client-side encryption. You can use a non-web client, though.


ProtonMail requires JS, or offers an app (Android/iOS). But it doesn't infect countless other websites.


Okay. No more Google account. So that means I miss out on... What exactly?


Everything in the Google product suite, clearly.


The Play store if you're on Android


I don't have a Google account on my new Android phone.

I get my apps from F-Droid, or I write them myself.


> But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users (0.1%) choose to keep it off.

That's a gross misrepresentation, isn't it?


I find it kind of humorous that the "mobile demo" seems to have been made on a Mac, complete with a cursor, and then resized to have a mobile border around it.


If you're an Amiga user, Google is now dead to you. As far as I can tell, none of the browsers available for the Amiga platform support JavaScript.


I try hard not to be a luddite as I age, but this level of automation and machine learning is so concerning.

It is SO frustrating to accidentally appear as a bot and get stuck at the mercy of an automated system. I was on some random site the other day and spent 3+ minutes solving Captchas until it finally let me through. I thought I was losing my mind. I don't spam, I don't automate queries, I come from an IP that has two residential users (netflix, xbox traffic dominate everything). Who knows what I did to offend the algorithm.


I was getting a persistent but unhelpful error message from an airline website while trying to change my reservation shortly after midnight. I assumed at first I had just hit the maintenance window and should try again in the morning, but now I am reading this thread and wondering if it was deliberate obfuscation because it deemed my connection suspicious.


Google captchas appear (to me) to be designed to identify individuals, not to separate anonymous users from machines. Google wants to know exactly who you are when it harvests your visits and behavior on almost all sites on the Internet.


I don't understand what this has to do with the fact that I repeatedly am having to prove that I'm not a bot despite doing nothing that ought to be triggering Captcha checks.

But hey, I guess mentioning this simple observation is worthy of numerous downvotes here. Always lovely.


I get this constantly. It's the new digital ghetto google has created.


Goodbye google. Wanted to leave, but was lazy. Thanks for forcing my hand, and those of thousands more.


Why would anyone who cares even a tiny bit about security or privacy ever want to sign in to Google?


This is unfortunate. On all my phones, I find the most annoying malware is spread via javascript. I'll get onto a page I read a few times a day, and suddenly, thanks to ads, I am served some javascript malware that attempts to tell me "my PC is infected" or other such garbage.

I grouse, turn off javascript, clean my caches, and then everything is copacetic for a bit.

If this means that, thanks to google's decision, I must in no uncertain terms have javascript on in order to interact with their product line, then I need to weigh risks and benefits, not proclaim loudly that I will run away from the google ecosystem, as many are likely proclaiming in this forum.

But weigh the risks of using products which require me to be vulnerable to attacks, versus alternatives.

From the viewpoint of google, you don't want your customer to ever be in that position. It is an arms race against bad actors, and forcing users to effectively lower their defensive posture means you really ... REALLY ... need to up your game on value for this to be even net neutral relative to the previous state.

Given how completely horrendous google's customer support is, and how incredibly hard it is to administer basic stuff (it took me more than 1/2 hour! to search and find out how to close an unused business g-suite account, which I couldn't do, until I manually disconnected from a marketplace product ... where this disconnection was also hidden ... google doesn't do ux/ui worth a crap, and has no real support for smaller users), I am not sure this is in their best interests. I am sure there is a product manager trying to pull them back from this ... somewhere ...

Or at least I hope so.

Basically, the risk calculation comes when you look at something holistically: is using the google universe worth the risk that universe forces you to accept?

Part of the reason that facebook is in user count freefall is this inherent risk. You are volunteering info that advertisers are dying to get, and they are the ones paying facebook. You are a packaged product, you can be microtargeted, and, various bits of self-serving posturing over election non-interference and account termination aside, they haven't quite grasped that this intimate relationship brings substantial risk for participants.

I've dropped messenger from my phone, and am about a few months away from dropping facebook from the phone. I don't need it there, and the risk to me is far higher than the "value" it brings.

Back to google. You don't want customers thinking this. You want them happily and securely playing in your garden. You don't want to force them to go out without their armor. Turning off javascript strengthens that armor. Forcing it on strips users of their most important armor.

Put more simply, bad google, bad.


I'm getting tired of Google dictating how the web should work. That's the job of standards bodies.

Google is increasingly taking the place of overbearing overlord that Microsoft embodied in the 90's and early 2000's.


Google is dictating how signing into their account system on their properties works, not login forms for _every single site_.


Google's policies get exported.

For a long time it was OK to drop email that came from a server without rDNS. Then gmail started allowing such emails, and then legit sites started sending email from servers without rDNS.

In this case the export mechanism is Captcha v3.


So you're saying you're getting tired of everyone else copying Google's behavior?


You're right. My comment is out of place in this particular instance. But there are plenty of other examples where Google does have too much power and influence.


Google is quickly becoming the login form for every website. And even if the website has its own login, it will always have reCAPTCHA.


I wouldn't blame Google for this phenomenon, though. They provided a solution; others were quick to jump on it and use said solution. Might be better to complain to the sites following the herd, then?


Is there a standard relating to whether JavaScript should be optional?


According to the WHATWG:

https://html.spec.whatwg.org/multipage/scripting.html

"Authors are encouraged to use declarative alternatives to scripting where possible, as declarative mechanisms are often more maintainable, and many users disable scripting."

"Authors are also encouraged to make their applications degrade gracefully in the absence of scripting support."


Just remember that CSS3 is already Turing complete.


Death to n o s c r i p t


I agree. I instantly IP ban anyone using noscript on my site.


If you're serious, I would much appreciate an explanation for why you would do such a thing.


I love JavaScript and hate NoScript.


coming soon: we noticed you are in private browsing mode. to keep your browsing secure, please sign in with history enabled.


Nooo, google wants you to think you are in private browsing mode so you share more secrets with them. They will just collect it all no matter what you do.


The correct, non-editorialized title is: "Announcing some security treats to protect you from attackers’ tricks"


Both are editorialized. I think a better, more neutral title would be "Google now requiring JavaScript to be enabled to sign in".


I'm sure this is on me, but I'm genuinely not sure what the difference is between your suggested title and what was used. (Obviously, I can see which words you changed, but I'm missing the significance)


The title has been changed recently; the old one was significantly more inflammatory.


AI should be able to defeat it by copying real users; it would require a full browser running, though.


Talking of protection.


How is this nonsense upvoted on hackernews..?


We detached this subthread from https://news.ycombinator.com/item?id=18350261 and marked it off-topic. Could you please stop posting angry meta muck like this?


It's surprising to me how many users on HN can only think as far as their own browser.

Really the question here is not "what's the harm of JS across the web" but rather what is the specific privacy cost of running JS on a sign in page and what is the security benefit of the same.

The worst case cost of JS on a browser is that you get a drive by download and your endpoint is owned. This seems unlikely on a Google domain.

The other, more normal, case is that a user is concerned about ad tracking. Providing an ad tracker on a sign in page seems pretty lame, and I'd be surprised, again, if Google was doing that.

The security upsides are likely several: anti-automation, anti-phishing, and an opportunity to track state-level adversaries who target users' Google accounts.

I don't know how others weigh these factors, but to me it seems entirely obvious that this is a good idea. Could the blog post have better laid out this case? Sure.


I'm sure tracking everyone and controlling exactly what they can do will make them safer. Why don't we put surveillance cameras everywhere and make them record 24/7 too? </s>

Authoritarian ideology like this is what turned me off the whole "security industry" years ago.


Google already controls exactly what you can do... on Google.

You're arguing against ubiquitous surveillance, when the OP is arguing for company surveillance of the way in which you interact with their login screen, on their site.


Adding "</s>" to the end of a sentence doesn't make it any less of a strawman.


Why would google not at least have an incentive, one they'd have to work against, to add tracking to their sign-in pages? They make all of their money off of ads, and they do that by tracking people to target ads.

You might as well say people shouldn't take precautions when swimming around sharks, because it's rare and it would be surprising if they attacked.


> Why would google not at least have an incentive, one they'd have to work against, to add tracking to their sign-in pages?

You're signing in. That's literally asking them to identify you across pages so you can have access to them.


No it isn't; that's just a side effect of the bolted-on implementation of cookies.

Signing in is literally asking them to identify you on one page so you can have access to that page.


ha well turn off javascript too and you won't even be able to use hackernews, so good luck with that.


Hacker News works perfectly fine without Javascript. In fact, I don't think I've ever enabled Javascript here on Hacker News.


> Hacker News works perfectly fine without Javascript.

keyword being "too".

> In fact, I don't think I've ever enabled Javascript here on Hacker News.

I mean, that's super, but it's just one more point in the column of how HN's population is out of touch with any regular person. It's really annoying to wait for the page reload on even a fast network, and good luck finding your place again if a thread is even moderately busy.


I leave JS off on this site and vote by middle clicking. There is no wait for a page reload.

In any case, you don't have to search for your place again, because the browser reloads with that comment on the top.


Ctrl f. But yes hacker news doesn't have the greatest UI.

Back to your main point. I don't think it's out of touch. It's different information or values. Do you think it's not annoying for me to not have JavaScript and have slow or broken pages? I know and feel the same thing, and I imagine anyone using no script does as well. It's just not worth it to me to let privacy be degraded for some speed.

I think this is due more heavily to being more informed about the situation than the average person, rather than different values. When Facebook started getting in the news about all their privacy violations, Facebook MAU went down and the number of accounts deleted went up. That indicates that once people are aware of what is going on, they make similar choices about interacting with the companies.

The difference between many people on hacker news and the average populace is that we can both install something like noscript, and have a rough idea of how to interpret the options on it.


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


This makes perfect sense for Google. This will objectively greatly increase the account security of their users. Remember 99% of Google users are not at all like people on HN. Google is helping the 99% at the expense of the people paranoid about tracking - but if you have a Google account, you've already lost that battle.

This is a completely reasonable business decision and trade-off.


Because unfortunately, way too many of us depend financially upon poor security practices.


I think this is a really braindead argument. Of course users are going to use javascript more, the more the internet demands it, the more people will be willing to allow it.

When I first started using noscript there were few exceptional sites which didn't work, and I didn't bother with them afterwards; if that had been the majority of sites, I would probably just have disabled noscript.

The idea that I _have_ to let your site run code on my machine is only laughable when it's an exception; the problem is the industry. And google is telling the industry that it's ok.

edit: yes, I understand; I'm over the hill and you all absolutely love shoving javascript down peoples throats, I get it. But please take a moment to consider... not? maybe you have a financial incentive to avoid thinking about the ramifications you inflict on others, maybe you're paid to be a JS dev. But really, if you're making the web worse, I will not cry for being downvoted, just know that I hate you. :)


We detached this subthread from https://news.ycombinator.com/item?id=18353373.


Sorry, but you're out of touch.

Almost everyone wants the features that JS enables. Literally: Almost everyone. I understand you don't, and I absolutely respect that decision, but it means that you're not worth developing for. Full stop. It's not a matter of mal-intent, it's a matter of financial logistics.

You bought that machine to run code. If you don't want to run code that sites serve you on the internet, don't visit them.

That said, I'd be willing to wager a fair bit that literally every line of code you've run on your machine (probably ever if it's been bought in the last few years) outside of the vendor installed OS and drivers came from the internet.


> That said, I'd be willing to wager a fair bit that literally every line of code you've run on your machine (probably ever if it's been bought in the last few years) outside of the vendor installed OS and drivers came from the internet.

We explicitly decide to install software and we know where we're getting it from. We may not be careful enough, but I certainly trust `brew install` a lot more than I trust a random site I happen to visit.

> You bought that machine to run code. If you don't want to run code that sites serve you on the internet, don't visit them.

I bought my car to drive it, but that doesn't mean that every person I pass on the street gets to drive my car.

You seem to think that visiting a web site establishes a trusting relationship with that site. I think that downloading a site's HTML and running its JavaScript are quite different levels of trust. Especially when the content and the code come from very different entities - eg, the journalist writing the article vs the advertising network the publication uses.

Clearly if I'm using Gmail, I'm trusting Gmail and I don't mind running their JavaScript. But the blanket statement that "if you don't want to run code that sites serve you on the internet, don't visit them" seems out of touch with how pervasively nasty the internet is.

Taking a walk on a city sidewalk does not require eating whatever you find there.


I think this is a reversal of responsibility. You're shifting the responsibility from yourself to a different entity for the choices you're making.

The single safest thing you can do while using the web is to simply be aware of what you're clicking on, and what sites you visit.

> I bought my car to drive it, but that doesn't mean that every person I pass on the street gets to drive my car.

Damn right you don't let random people drive your car! just like I expect you not to click every link you see!

I'd also love it if you'd install an ad blocker, remove just about every other extension you have in your browser (ABSOLUTELY do this for old firefox extensions and IE BHOs) and trust your browser when it tells you that maybe visiting that particular site isn't the best idea.

That said, modern browsers do a really, really good job at isolating the code running in that page from anything you care about.


Correct me if I'm wrong, but it seems that your personal philosophy places absolutely none of the onus for the internet being dangerous and unfriendly on the people who actually develop things for the internet. You instead blame the layman who has no power to control how websites are constructed.

The heck?


I'd disagree entirely. I'd argue that I'm aware of how much effort has gone into making the web much, Much, MUCH more convenient and less risky for the average, day to day user.

It's frankly stunning how much the ecosystem has changed just over the last 5-10 years. And I mean that as a developer who works in the security industry with a focus on browsers/extensions. It's ludicrous how much more secure the web of today is over the web of the past.

That said, it's not yet secure. There's always a risk/reward decision for using the web, especially around HOW you - the user - uses the web.

So to make an analogy: The infrastructure in place between root authorities, the IETF, browser vendors, ISRG (Let's Encrypt is just one), and website developers has done a DAMN good job in making the web less vulnerable than it was.

It's a nicely paved two lane road that goes nearly everywhere.

That said, you are interacting with the entity hosting the site you visit, NOT THOSE GROUPS, when you visit a site.

It's your responsibility to make sure you trust that entity, and do your due diligence.

Just like I wouldn't try to drive my crappy 1998 Mazda Protege offroad - It's dangerous and I would be unprepared.

It's your responsibility to make decisions for yourself (or at least I fucking hope it is... that's a fundamental aspect of a democratic society that I STRONGLY believe in). That means living with the consequences.

It can also mean choosing different service providers that are less convenient if you deem the easy ones too risky. If you're not willing to do that (aka: switch away from gmail if you want js disabled everywhere) and you still want to complain... I find it hard to treat you seriously.


> It's your responsibility to make sure you trust that entity, and do your due diligence.

What does this entail? I mean, ask the average person if they trust the New York Times, the London Stock Exchange, or Spotify. Those are well-known names - sure we trust them. We trust that, as an organization, they are not plotting to steal our identities.

But trusting them means trusting their business people, their IT people, and their advertising partners, not only to be moral but also to be competent. And whoops, all of them have served malvertising in the past.

Nobody has the time and expertise to evaluate every site's JavaScript every time they visit. The "due diligence" you describe would be a very specialized full-time job.

Whereas turning off JavaScript in the browser takes about 10 seconds.


> It's your responsibility to make sure you trust that entity, and do your due diligence.

> What does this entail?

My whole point is that that's up to you to decide. No one else can make that choice for you.

The VAST majority of people have decided that the risks they face today are worth it, and continue to use the web with js enabled.

If you're not one of them, I absolutely respect that decision, but it means you'll have to accept that companies are making financial and security based decisions based on the behaviors of normal people.

That means that when spotify (and let's be honest, every other streaming service) doesn't work without js, you go somewhere else, and use something different.

That's the whole point I'm making. You can make any decision you'd like with regards to your own security, and any decision you'd like with regards to the sites you visit. But that site is free to act in its own interests, including adding features and services that target the majority of their users.


> You're shifting the responsibility from yourself to a different entity for the choices you're making. The single safest thing you can do while using the web is to simply be aware of what you're clicking on, and what sites you visit.

This seems like a fundamental misrepresentation of how malware is distributed, and how privacy is compromised. People are responsible for the sites they choose to visit. They're not responsible for what those sites serve them, because they can't evaluate it until it's already been served. I don't know what a page will serve me until I visit that page. I can't.

Holding people responsible for whatever content they're served on a page is the same sort of "suck it and see" logic we rightly rejected with "by opening this package, you have consented to the terms of this license". It's a concept that's fundamentally incompatible with informed agreement.

If you want to hold people responsible for clicking "crack DRM now!" ads on the Pirate Bay, fine. But malware doesn't necessarily require outbound clicks, and doesn't necessarily come from dubious sites. Forbes locked out adblockers and served malware through their ad network. Xfinity, the NYT, ebay, Youtube, the Atlantic, and a dozen other major sites have served malware. Newegg, British Airways, and probably Ticketmaster were hit by Magecart, which infected even users with adblockers running.

Why do I disable Javascript? It's not because I don't think I'm responsible for my safety online. Precisely the opposite - it's because I'm not relying on other parties! Magecart was running on major sites two months ago. Adblock, an up-to-date browser, and responsible surfing didn't keep people safe, but uBlock did. A framing where users are accepting any code that ever appears on any site they visit is a framing where we all ought to abandon the internet outright.


> Damn right you don't let random people drive your car! just like I expect you not to click every link you see!

The difference is that I can interact with any number of people, yet only trust a few to drive my car.

You seem to suggest that I should trust everyone I interact with to drive my car, and since that's risky, I should limit my interactions to a few people.

> You're shifting the responsibility from yourself to a different entity for the choices you're making.

Not trusting a site's code is taking responsibility for what it might do to me. Trusting their code is giving them the responsibility to do what's right.

> The single safest thing you can do while using the web is to simply be aware of what you're clicking on, and what sites you visit.

This is good, practical advice for non-tech people. But fundamentally there is no reason the web needs to be unsafe. Viewing a series of hypertext documents is safe, no matter who wrote them. The web is totally safe if you browse it using curl. It's only when we assume that "of course you're going to run that stranger's code" that things become dangerous.

It's great that we can have web applications. It's not great if every page is an application. There should be an explicit step from "I'm casually looking at your page" to "now I want to trust you and run your code".

> That said, modern browsers do a really, really good job at isolating the code running in that page from anything you care about.

That code can't wipe my hard drive, but it can track me everywhere if I let it.

The reason we need content security policies and cross-site request forgery tokens and cross-site scripting protection etc, etc is that our browsers are constantly eval'ing whatever code they're given.


> The single safest thing you can do while using the web is to simply be aware of what you're clicking on, and what sites you visit.

This would be accurate, if watering hole attacks[1] weren't a thing.

[1]: https://en.wikipedia.org/wiki/Watering_hole_attack


Nope, it's still accurate. There are always risks. This is one (it's not a common one, but it's a risk).

To reframe - my statement is akin to saying: the single safest thing you can do while driving is to pay attention to the road.

And you responding with: "This would be accurate, if trees falling on you weren't a thing."

Sure, it's a risk, but it's not the primary risk.


Are you aware of how many requests pages make simply by visiting them? Have you run a DNS server and seen how many third-party sites get hit simply by visiting one news page, for example?

I would rather have more control over what gets scraped from who-knows-where and ran on my machine than not.


You have that control.

You have not lost that control.

Why does this matter in the context of this discussion?

If you choose to visit that news site, you're trusting their web developers. You can always manually go and set up tools and systems to block those 3rd party requests (like I mentioned earlier, an ad blocker is a great first step, because it's easy and automatic). If you're not willing to take those steps...

Don't use the site.


You're making an assumption about me. I actually don't use noscript these days (I haven't for at least 6 years at this point); however, I will fight for the rights of my brethren who do not have machines with six CPU cores and 64G of RAM, or those who have had their scrollbars hijacked, or their text-to-speech software go insane, or their web experience, which used to be controlled by them, insidiously invaded by people who want to use their web browser as a software distribution platform and virtual machine.

Javascript being enforced allows for a slew of sub-par web development which does not cater for performance (minor) or accessibility (major).

I dislike the trend of _forcing_ people to do something that doesn't benefit them. Even if I'm not personally affected.


I'm sorry, but who is _forcing_ you to do anything?


[flagged]


You're talking about a service that this company provides at no charge. No one is forcing anyone to use these services.

If you don't like that, go somewhere else. No one is holding a gun to your head and going "RUN JS OR I SHOOT!".

Why do you assume that you, and the people you claim to be advocating for, are able to compel this entirely separate entity, making a decision it believes is best for the majority of its customers, to do something that benefits you - a user of a service provided at no charge, and with absolutely no binding contract?


Does anyone force you to use Google’s services?


> Of course users are going to use javascript more, the more the internet demands it, the more people will be willing to allow it.

I think you have this backwards; the vast majority of users like the benefits of js-enabled sites, so they get built. That ship has sailed long ago.


Even so, making the assertion that:

"javascript adoption in the browser will increase in future, therefore we will enforce javascript"

is like saying:

"more people will have passports in future, therefore we require passports to get a bus pass".

By doing that; you make it true.


It's already true. You can quibble over whose fault it was, but it really doesn't matter at this point. The web serves code that users run. That's because the web is the best distribution medium for code we've ever seen.

I bet you've also installed client side applications that came from the web, on a vendor installed OS that came from the web, and drivers for your machine that came from the web.


You're equating my signed/sealed/delivered package management to javascript??

??

??????

Really? You honestly don't see a difference? I have a chain of trust with my OS manufacturer (Apple) and de facto trust with a centralised entity for most of my applications (i.e., my company for things that we build, or the Homebrew project for most other packages).

I should not have that trust with any idiot who manages to get a signed SSL certificate; i.e., the whole internet.


You mean like the signed/sealed/delivered csp policies that browsers have supported for years?

https://developers.google.com/web/fundamentals/security/csp/

To quote the signature section:

'<hash-algorithm>-<base64-value>'

A sha256, sha384 or sha512 hash of scripts or styles. The use of this source consists of two portions separated by a dash: the encryption algorithm used to create the hash and the base64-encoded hash of the script or style. When generating the hash, don't include the <script> or <style> tags and note that capitalization and whitespace matter, including leading or trailing whitespace. See unsafe inline script for an example. In CSP 2.0 this applied only to inline scripts. CSP 3.0 allows it in the case of script-src for external scripts.
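In practice that looks something like the following (the hashes and the CDN URL are placeholders); Subresource Integrity is the closely related mechanism for files fetched from elsewhere:

    Content-Security-Policy: script-src 'sha256-AAAAexampleBase64HashOfTheInlineScript='

    <!-- SRI: the browser refuses to run the file if its hash doesn't match -->
    <script src="https://cdn.example.com/lib.js"
            integrity="sha384-BBBBexampleBase64HashOfTheFile"
            crossorigin="anonymous"></script>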


A lot of the arguments in this thread sound like the people who don’t want to vaccinate their kids. Right down to not giving a shit about “herd immunity” - not caring if their hacked account is used to attack their less informed friends and family.



