> they made all top-level event listeners passive by default. They call it “an intervention”.
This is my very problem with Chrome/Chromium right now. The Chrome team makes assumptions about how things "should" be (in a highly subjective way) and breaks the web.
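For context, the passive-by-default intervention can still be overridden per listener. A browser-only sketch (this won't run outside a browser), using the documented `passive` option:

```js
// Chrome treats top-level touchstart/touchmove/wheel listeners as passive
// by default, which makes preventDefault() inside them a no-op.
// Passing { passive: false } explicitly opts back out of the intervention.
document.addEventListener('touchstart', (event) => {
  event.preventDefault(); // honoured only because the listener is non-passive
}, { passive: false });
```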
Another example: they decided to ignore the value of `autocomplete` attributes on `<form>` tags [1], because:
> The tricky part here is that somewhere along the journey of the web autocomplete=off become a default for many form fields, without any real thought being given as to whether or not that was good for users. This doesn't mean there aren't very valid cases where you don't want the browser autofilling data (e.g. on CRM systems), but by and large, we see those as the minority cases. And as a result, we started ignoring autocomplete=off for Chrome Autofill data.
Problem: Chrome now auto-fills the wrong parts of forms with usernames/passwords, which breaks forms that receive unexpected data when submitted. And now they have opened an issue on their tracker [2] to track "Valid use cases for autocomplete=off".
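To illustrate the kind of breakage being described (the form and field names here are hypothetical), an admin form like this can get the admin's own saved credentials dropped into fields meant for another user's data:

```html
<!-- Hypothetical "edit user" form in a CRM/admin tool -->
<form action="/admin/users/42" method="post" autocomplete="off">
  <!-- Chrome may autofill the admin's own e-mail here... -->
  <label>User's e-mail <input type="email" name="user_email"></label>
  <!-- ...and the admin's saved password here, despite autocomplete=off -->
  <label>Set user's password <input type="password" name="user_password"></label>
  <button type="submit">Save</button>
</form>
```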
It is insane to decide that developers are wrong to use certain attribute values, and to assume how a page should behave while ignoring developers' intentions and Web standards.
The problem is that some sites decided to use autocomplete=off to impose their feeling about password managers on users and make them more difficult to use. It's the same reason I have to turn off clipboard events because some sites think it's okay to block copy/paste.
So the question becomes what's less bad: for website developers to break their own sites, or for Chrome to break other people's sites.
I'm all for empowering browsers to override abusive behaviour from websites, but using bad defaults and breaking innocent websites as a result is not the solution.
Chrome is a program that I run on my computer. It protects my interests, not the interests of random crappy website developers doing horrible things like hijacking clipboard events. I'm all for the defaults being whatever is best for me.
Come work tech support for a company/product with a web-based form that has a password field in it (like a CRM or other administrative system). Now explain to users why we can't stop their browser filling in their password in the field that's asking for the other user's password.
I've had situations where I'm configuring a VPN connection on the web interface for a router... then spending 10 minutes head-scratching why the hell it's giving me an invalid PSK message. Oh, because Chrome filled my password into the goddamn wrong field when it wasn't asked to.
I suppose you could tell them to go back to a previous version of the browser.
Of course that would be a terrible idea for a number of other reasons, but if enough people specifically avoid more recent versions of Chrome for this reason, maybe Google will finally do what's actually best for users and make this behaviour configurable.
That doesn't sound like a safe way of resetting passwords. You should supply them a token they can use to set their own password. Your employees should not know your users' passwords.
Exactly that. I use this feature as an admin relatively often in Adobe Analytics. I'm never able to know the "real" password a user sets on first login after a reset.
A one time password is nothing but a token - maybe not ideally named.
In a corporation, this is totally okay as long as the reset password is safe or the account is locked from external access until the password has been changed.
The employee should of course change their password, and that should be enforced by policy; the admin shouldn't know user passwords.
Tokens work too, but they're a bit of overhead, especially in a smaller SMB where the admin is probably just across the corridor or at least in the same building.
You're creating a secret token. Both the administrator and the browser should be made aware of this: the administrator, so that they don't send it to the user via an insecure method (tweet it publicly); the browser, so that it shows warnings if the site is somehow served without SSL or is compromised.
If it's something like a password, it should be treated like a password. Why shouldn't you use type=password for that?
>Chrome is a program that I run on my computer. It protects my interests, not the interests of random crappy website developers doing horrible things like hijacking clipboard events. I'm all for the defaults being whatever is best for me.
>The browser is the agent of the user.
Here I support you, but do you remember that Googlers pressed assertively for hijackable right-click, no-opt-out history spoofing, and many, many other supremely user-hostile features in W3C standards?
Out of all the broken web novelties, they prefer to unilaterally break the ones coming from outside:
Declarative right click menu from Mozilla was killed by Chrome.
Declarative SMIL animations: same story.
They added support for HTTP pipelining and killed it just to force people to migrate to their protocol
They had working drag and drop on mobile, yet they killed it because "iPhone's dnd is broken, so we made ours broken too".
They banned a few people from Bugzilla for insisting on turning off the vibration API in mobile Chrome, yet the moment people started using it for bot detection, they killed it in iframes (denying its use as hard proof of click fraud for ad companies).
I always told clients that if you have to choose between having feature A broken on iPhone or having it broken in every other browser, choose to break iPhone. Now I don't know what the right advice is, probably something like "you need to settle on a variant where stuff is broken in a consistent manner across all browsers".
> It's not protecting your interests, it's protecting Google's interests.
But you can always install (or develop) another browser that might do this job better for you.
> If it was protecting your interests, it would be a toggleable setting that defaults to normalized behavior.
The problem with this view is that the vast majority of end users do not want, and in fact will never know about or use, a new setting, and so the web doesn't move forward. This is the opt-in problem that Google is talking about. In politics, the analogous problem is studied as public choice theory.
Users don't use thousands of sites. If a site misbehaves and users want autofill on it, they should be able to override autocomplete=off for that site.
Google could even make an extension for this to allow users to gather and share a list of sites that behave in a user unfriendly way.
But of course, it was much easier to fuck up half of the Web.
Public choice and mandates are great for things that require cooperation and agreement against a race to the bottom (like tax havens), not against autocomplete fucking off.
I have a site that deals with HIPAA-protected information. I absolutely don't want autofill, especially for sign-in information. Chrome makes that impossible and just ignores the code.
Could you explain the downside of allowing auto-fill for sign-in information here? I understand that security is a concern, but I don't see how allowing a password manager to handle sign-ins would be harmful.
In a medical setting, most computers are public. Sharing passwords is a HIPAA violation, because HIPAA requires a complete, accurate log of everyone who has looked at or modified a medical record.
My guess is that many medical computers aren't well administrated and leave autofill on, which can easily cause accidental HIPAA violations.
Disabling autofill seems like the wrong way to handle the problem, though. Autofill does not necessarily mean that passwords are being shared; it just means that the user isn't typing them in. Strong policies on the machines in question and ensuring that users aren't sharing each others environments seems like a considerably more complete solution to me. This can be facilitated by tools like https://www.imprivata.com/single-sign-on-sso. Ironically enough, disabling autofill may actually prevent this tool from providing some of the benefits it's intended to provide.
While I agree that autofill on its own is not a complete solution to GP's scenario, it's certainly a potential point-of-failure, and I understand their need to eliminate as much risk as possible. While the most significant aspect to be improved is the security habits of the clients themselves, that doesn't mean that GP and their company should be prevented from doing what little they can just because Google wanted things to work their way.
That's why there are profiles. Even in Chrome. But also in Windows.
Or that's when the sysadmin should disable the password manager.
It's not up to the website.
If you have an internal site, you already control the browser, then why do you want to fight the browser from the inside instead of from the outside? :o
I can't remember the exact scenarios that triggered it, but I've seen situations where Chrome autofills data other than sign-in passwords as well (also ignoring autocomplete=off).
There was a form where an administrator could change certain details of another user's profile, and if the e-mail (or name) field was empty on a profile, Chrome would autofill it with the administrator's e-mail, which can result in unwanted / corrupted data.
Some replies in this thread suggest configuring Chrome differently in organizations where this is important to avoid, but when you are a SaaS vendor, your users will inevitably blame you for Chrome's behavior.
With the same yellow bar at the top that they use for other issues. Tell them what the site is doing and ask whether they want to allow or block that, and whether to do that for all sites or only this one.
IMO, breaking an established API with no reliable way to work around it is very bad. I have been bitten by this, and only now understand why. I will probably switch to Firefox.
> The problem with this view is that the vast majority of end users do not want
I'm a user.
I want that.
I'm probably not "a majority", but I fail to see how being assimilated into the majority without my consent, without even knowing it is happening, is "protecting my interests".
What would you think of a restaurant that charges you 50% of the bill as tip because "the majority of the customers does that"?
> What would you think of a restaurant that charges you 50% of the bill as tip because "the majority of the customers does that"?
I don't know about you but when I go to the restaurant, I can't change the price of the items. If they set the tip for you, that's pretty much setting the price, isn't it?
I don't really use that auto-filling thing much, but the few times I did, Chrome asked me if I wanted to use it on that website. Isn't that what you'd consider a toggleable setting? I chose to autofill my account information on that website.
Firefox would have given you an option, and someone would have made an extension to control it for each site. In fact, that is exactly what happens there.
Now Google shoves their preference on you (a coin-toss whether it aligns with yours), and you shout that this is protecting your freedom of choice.
Adblock is "breaking websites". If a website is doing something crappy, I'd prefer to not have that crappy thing happen than experience the developer's true intent.
Adblock is opt-in. If the browser misbehaves because of an extension, neither the browser nor the website is to blame.
Chrome cannot decide on a whim that old websites should be broken. That is not how the web moves forward. Take, for example, Firefox, which kept breaking extensions with every update; now it has few compatible extensions.
It's a little reductio ad absurdum, but what about defaulting to popup blocking? blink? marquee?
It's not the same thing, but maybe worth considering when defending web authors' intent being ignored.
I believe Chrome on mobile is the default (only?) browser. So it is opt-out, and a very painful opt-out that requires technical knowledge.
You ship an experience like that, and you end up not only breaking old websites but shifting the emphasis onto regularly updated websites that work on mobile. That was the intent! But it is not the web we know and love.
> I believe Chrome on mobile is the default (only?) browser.
It's default on a lot of Android phones, but far from the only one available. I use Firefox on mine. Opting to use a non-Chrome browser on an Android phone is about as technical as using a non-Microsoft browser under Windows.
They also can't opt-in to adblock or noscript or me choosing to run Netscape version 4. Developers fundamentally do not get a say on what I choose to run on my system; at best the only control they have is saying "nope we aren't even going to try to run on whatever you've got" but even that isn't a guarantee.
> They also can't opt-in to adblock or noscript or me choosing to run Netscape version 4.
This is a gross oversimplification.
They also cannot opt-in if you don't have electricity in your house or don't own a computer
DOH!
This is not a real argument
A real argument is obsolescence
Of course there's a point when older versions are not supported anymore
Here we are talking about future versions breaking compatibility, without shared consent, without a deprecation roadmap, that need to be supported in a Chrome-specific way, making Chrome the new IE, with no real benefit to the user and a vendor lock-in by Google.
When users try a different browser, for whatever reason, they will experience a different web, which for the non-techy majority will look like a broken web.
But is it best for you when sites you use don't work properly because the latest update of your browser breaks the standard?
I'm all for giving users the power to ignore the standard and disable certain features, but forcing it on users without making it configurable sounds like a really bad idea.
A better idea would be to show that yellow bar at the top explaining briefly what the site does and asking whether the user wants to allow that. That keeps the user informed and empowered, rather than subject to the whims of the website and browser makers.
You mention website developers and Chrome developers, but spare a thought for users as well. If I want to use a password manager and the website developer decided to stop me from doing that, I appreciate Chrome helping me out.
How is this different from websites that think it's okay to force you to abide by ridiculous password restrictions (e.g. 6-8 characters, must include digits, upper- and lowercase letters, and special characters but no quotation marks or any known SQL keywords)?
The right thing to do is to bring the issue up with whoever is running the website. If they decide not to act on it because they think they know security/UX better than Google/you, they're stuck with crappier security/UX and the market should sort it out.
I've actually managed to convince a company to stop prohibiting copy-paste on logins by pointing them at the new NIST password guidelines. I don't expect this to work for every company, but if they intentionally don't change, it's okay to name and shame them for it. Whether it's a UX sin or security voodoo.
> How is this different from websites that think it's okay to force you to abide by ridiculous password restrictions
The difference is that disabling of "autocomplete" is a user interface issue, and can be addressed by the browser I use. The problem of ridiculous password restrictions is not usually something that can be controlled by the client.
I like the idea of convincing IT to change crazy password policies, but I don't have the mental energy to navigate these huge bureaucracies.
You don't necessarily have to convince IT. If you instead convince their legal department that by not following the NIST standards for passwords they are opening their company to a lawsuit, that could get results a lot faster.
When IT is convinced, they have to decide when to put it into the budget. If they think their policy is okay, just not perfect, the fix will probably be buried at the bottom of the budget pile and cut every year.
When a company does not follow a NIST standard [that applies], that is admissible in court against them. While it isn't an automatic loss, they have to defend why they didn't follow the standard. In some cases the defense is not to the jury but to the judge, who can make a "statement of fact" and tell the jury to assume negligence for not following the standards. When legal says the cost of not complying with NIST password guidelines is potentially 10 million dollars, that puts fixing the password requirements much higher in the budget.
The relevant bit being "Verifiers SHOULD permit claimants to use 'paste' functionality when entering a memorized secret. This facilitates the use of password managers, which are widely used and in many cases increase the likelihood that users will choose stronger memorized secrets."
But that's a recent change to the NIST guidance. Searching for "Bill Burr NIST" will turn up recent stories about the original author's regret of a lot of the password recommendations from the original publication in 2003 that survived until the update this year.
No, it is an idea. The NIST standard is new enough that I wouldn't expect anything yet. Going to court takes years. If they settle out of court, they probably make not talking part of the settlement.
NIST is recognized worldwide in a similar vein to the IEEE, IETF, or ISO. It's a standards organization important enough to move banks, large companies, and outsourcing firms.
A recommendation won't allow you to sue a company contrary to what the other commenters seem to think, but it's enough for any internal employee who works on something to call for and justify a change.
Maybe. Does UBS have a branch in the US that you can sue? Alternatively, does the country you are in treat foreign standards as admissible in its courts in some form? Does the country have its own version of NIST that is willing to "leverage" the work of another country into its own standards, thus making the NIST standard a national standard for that country? Does the country have its own version of NIST that has already issued a standard? Any of the above are angles to consider before you reject legal approaches to the problem just because NIST doesn't apply in your country.
Your question is one of the reasons I didn't say the legal route was a better way. It is an option that may get better results in some cases. Even in the US it may not always get the best result.
I'm all for allowing the user to control whether autocomplete is or isn't respected. But the browser breaking web standards to force certain choices on the user is about the worst possible way to go about this.
Aren't cookies a web standard? I can assure you that Safari's policies w.r.t. cookies "break" a lot of website functionality, and quite intentionally so. You may not like that functionality, but the website marketers like it and have spent lots of time implementing it.
So - what's ok to break, and what isn't? In the end, it's a judgement call on the browser developer. I like what another person said here - the browser is an "user agent", as long as the actions are clearly motivated by user interest, I think they're ok even if/when they break some standards.
I'd say, for a start, achieving consensus before breaking things is ideal. If the attempt to achieve consensus fails, at least I can see they tried (or I can disagree). Hopefully in the future users can use their leverage to make it not a judgement call by the browser developer but a judgement call by the community of users. The myriad of features in browsers, however, makes it a difficult arena to enter, leaving a large swath of users subject to these judgement calls.
> as long as the actions are clearly motivated by user interest
It doesn't matter what the road is paved with. Many things are not clear, and intent is also not clear. You should not use intent to determine what you are okay with; you should use the action and its effect. You may be okay with Safari's approach to third-party cookies, or Chrome's approach to cookies, and that's fine. But you might not be okay with the next action, and when it hurts you as a user, the reasoning will matter less than it does when you support it.
They do have a consensus about these things among their users. Virtually no one wants their password manager to not work or for third-party marketers to be able to track them more easily.
The people who disagree are third-parties who want to impose their own preferences on Chrome's users. Their opinions should not be taken into account because they are not Chrome's users.
I am a Chrome user, I totally agree that the browser should fill the password for me and not accept third party cookies. And yet I don't want the fields in my intranet app to be filled with garbage.
We get regular complaints from users using chrome about password autofill where they didn't want it. It would be really nice if chrome would honor standards and let us worry about how to make our users lives better.
Give the user control. Breaking behaviour that the user might rely on, is not acting in the interest of the user. If you think it's better to block certain behaviour, at least tell the user and allow them to override your default.
Except in this case they're breaking forms that disable autocomplete for legitimate reasons.
If there is never a valid reason to disable autocomplete and therefore "the standard is wrong", there are ways to change the standard. The standardisation process for Web APIs actively involves browser vendors but allows for building a consensus before jumping the gun.
"How is this different from websites that think it's okay to force you to abide by ridiculous password restrictions (e.g. 6-8 characters, must include digits, upper- and lowercase letters, and special characters but no quotation marks or any known SQL keywords)?"
If the Chrome team could figure out a way to force all of those sites to change their password restrictions through a Chrome update, they probably would.
> a browser should be the extension of the users desires.
In a perfect world that would be the case, because the browser would belong to the user. But how much did you pay for your last browser? Consider yourself lucky when the browser does something you consider beneficial.
"The market should sort it out" is deeply naive wishful thinking. Numerous sites block copy/paste, block password managers from functioning well, and still have password rules from the Pleistocene, let alone the most recent NIST recommendations. And those companies are worth more today than 10 years ago. So no, it really doesn't get sorted out. To the market, this is a detail it simply doesn't care about.
Because it is a detail virtually nobody cares about.
How insane would the password rules have to be for anyone to travel 1 hour more to go to a different university? How insane for them to stop playing a given video game? To change banking institutions?
I don't have the answer for others, but for me, the answer to all of those is "pretty insane". Except for the banking case, password security is a minor concern (and even then, the system protects us with anti-fraud laws and what not).
In a sense, the market is sorting itself out: it just decided that it doesn't care much about passwords. In fact, if you figure out a way to be profitable while offering twice the interest rate but every time people log in to your bank they have to dance the robot or whatever, you'd probably still have customers.
I especially hate websites that think it's ever reasonable to disable pasting into inputs! Luckily in most cases you can just right-click -> Inspect Element -> $0.value = 'paste'.
You can also copy text with right-click -> Inspect Element -> copy($0.innerText). Although you'd use $0.value for an input or textarea.
Password manager applications that are not built into the browser use clipboard pasting to insert passwords into login form fields. If browsers honored autocomplete=off, websites could block the password manager applications from entering the passwords (on their own websites).
> does assumption on how things "should" be (in a highly subjective way)
They claim they are making these decisions based on user data, which I have no reason to doubt.
> and breaks the web.
It breaks crap websites that are already broken from a performance and usability perspective.
I personally think this is exactly what is needed to make the web better as a whole.
Individual developers working for individual companies are rarely if ever thinking about the good of their users, except in a very narrow profit motivated sense, and never thinking about the good of the ecosystem as a whole.
Google is also "breaking the web" by not auto playing videos, not allowing alerts in one tab to block the entire browser, etc. Those all seem like good things to me.
Woah woah woah. Let me recount/nutshell the crux of conversations I've had with my product manager(s) about things like this.
Me: Hey, I'd like to add a work-item to our current sprint that reworks semantic attributes into our site.
Them: What would that do?
Me: It improves the underlying architecture making the markup more usable and extensible.
Them: How does that benefit the users?
Me: Over time it will reduce the TTM (time to market) for features and allow us to adopt a common standard others use on the web.
Them: So there's no UI and it won't affect their workflow?
Me: Well, no.
Them: Then why would you suggest it? We don't have the budget to add meaningless development tasks.
I might be biased toward thinking there are plenty of developers and engineers out there thinking about this stuff, but we don't always get the final say in what is done. Also, a lot of enterprise applications were around before many of these standards existed and continue to be. It's a struggle to improve what doesn't directly add to the bottom line.
This is true in a very practical sense: Product people will not care about engineering specifics. But that being said, I think you're missing the important take-away from that conclusion:
You must decide that it's necessary. As an engineer, you are the most qualified individual to determine if something is technically necessary. Just like you decided that proper indentation, unit tests, refactors, etc. are important, so must we decide that other elements are.
> It's a struggle to improve what doesn't directly add to the bottom line.
That's exactly the point, and you're essentially nitpicking (with anecdata) the fact that he pinned the problem on developers instead of product managers. Regardless of who makes the decision in companies, the end result is pretty much the same.
It brings the site closer to recommended accessibility guidelines, making it easier for e.g. people who have vision loss to navigate. It also reduces our potential legal liability to such users for not meeting accessibility requirements.
We all would like to see this "user data" and make up our own mind. Is the whole open source thing just a gimmick for Google?
>Individual developers working for individual companies are rarely if ever thinking about the good of their users, except in a very narrow profit motivated sense, and never thinking about the good of the ecosystem as a whole.
>Google is also "breaking the web" by not auto playing videos, not allowing alerts in one tab to block the entire browser, etc. Those all seem like good things to me.
It's interesting that you're calling for a browser to not implement standards.
In any case, an advertising company would be the last entity who I would trust to do anything "good" with regards to the web.
> Its interesting that you're calling for a browser to not implement standards
The "standards" are created by the browser makers. It is up to them to decide what the standards are by choosing what to implement.
For instance, Apple can decide to not allow 3rd party cookies, which "breaks the web" and "doesn't follow standards" but it is also the right thing to do.
In practice, the causality is even more delayed than you describe. Browser vendors implement what they want, and then once two or more vendors have implemented a feature and some developers have adopted it, it gets considered for standardization. Standards which are written without a working implementation are widely ignored; there's a whole graveyard around the W3C (see eg. the Semantic Web) where somebody wrote a standard and nobody cared.
If your position is "anyone can do anything" as long as it has some "perceived good" - in your opinion, then okay, that is your position and I have no argument.
With evergreen browsers, that basically requires a dev team to adjust the code and release/deploy a new version every X months.
WHICH IS OKAY, if we're talking about security/privacy, but performance ... not really. Let the market (and the users) decide. If a site takes forever to load, and scrolling is impossible because it takes ages to scroll, and burns a mark in your palm in that time, then maybe you'll reconsider visiting that site ever again.
> With evergreen browsers, that basically requires a dev team to adjust the code and release/deploy a new version every X months.
> WHICH IS OKAY, if we're talking about security/privacy, but performance ... not really.
It's not automatically OK to break functionality even for security/privacy reasons, IMHO. Browsers are used for many useful purposes that do not involve visiting sites run by large organisations with full-time development teams assigned to ongoing maintenance. Intranets. Embedded UIs in devices. Personal sites full of useful information but with no-one actively maintaining them.
The number of breaking changes browser developers have been willing to make in recent years does not bode well for the future of the Web, a platform which became what it is today precisely because it was a known target for developers and for a few years the industry standards actually meant something.
The Brave New Web is one dominated by ego-stroking contests between the major browser developers and huge, centralised sites with effectively unlimited resources run by just a handful of organisations. Notice that neither users nor original content creators appear in that previous description. The original author here is quite right to call the big players out for that. We saw Microsoft's infamous embrace-and-extend strategy and the way the Web was held back for years as a result. We should be just as wary when the likes of Google sing the same song.
Static sites without maintenance are not a problem, they remained readable all these years, and I guess they'll continue to be so. (ACID test and all.)
And there are non evergreen browsers for those environments.
I agree that unilateral moves are bad for the web, but the reality was always that. Vendors have their own agendas, and usually the overlap is large enough.
Plus, the change was always there too. Framesets work just as 'fine' today as they did back in '96, and the same goes for tables, divs, and falling-snowflakes-in-the-background DHTML snippets. The churn we see today is because the web became a lot more diverse; a lot of things were, and still are, living on the edge, some in terms of performance and creative expression (just think about CSS Houdini, gradually giving lower-level access to the rendering pipeline), some in terms of security (because they depend on some implementation detail of browsers, such as CSRF protections, which took a decade to get sort of right with the Same-Origin Policy).
Dude, autocomplete=off should just work. Chrome does the WRONG THING. The spec says what to do, the living spec says what to do, and Chrome does whatever it wants. It's BS. It breaks the web.
Then Chrome devs say to use autocomplete=something-stupid to get the specified behaviour of 'off'. WTF is that?!
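For reference, the commonly cited workaround (the field name here is made up, and Chrome's handling of these tokens has changed over time) is to use a token Chrome does honour, such as the standard `new-password`:

```html
<!-- autocomplete="off" is ignored by Chrome for login-like fields,
     but the "new-password" token usually suppresses password autofill -->
<input type="password" name="psk" autocomplete="new-password">
```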
A website is owned by someone. If they decide that in their shop they want to have a carpet floor, it is up to users not to visit that shop (if they don't like it), but not up to Adidas to turn the floor into tarmac because that works better for their shoes...
If the website says to display ads, my browser will properly ignore that. If the website says to prevent the user from switching away from the tab, my browser will properly ignore that. If the website tries to take over my system and install malware, my browser will properly ignore that. And if the website tries to break the ability to remember passwords, my browser will properly ignore that too.
Websites consist of code to be interpreted by browsers as they see fit, for the benefit of their users. Those users do not necessarily want exactly the experience the site authors want them to have.
The difference between autocomplete=off and the rest of your examples is that there are actually positive UX use cases for disabling autocomplete on certain inputs (e.g. when you are an admin editing existing users)
Ad blockers have false positives as well. And there's a use case for blocking the user from closing the tab (onbeforeunload), such as prompting them to save/submit what they're working on. But for all of those, the browser is still in control and the question is what provides the most benefit for the user.
So, along the same lines, it may make sense to improve the UI for autocompleting users, or for hinting about the use of the field, to make it easier for sysadmins. But that shouldn't break the more common case of handling sites that just think they're Too Special or Too Important to allow saving login information.
My computer is owned by me. If I download your HTML, I get to do whatever I want with it, using any software I please. I am under no obligation to render it or process it in the particular way you would most prefer. You can suggest that I might want to render your website with carpet on the floor, but if I object to carpet and choose to render all websites with tile floors instead, that's my choice - because it's my computer, and I get to decide what I'm going to do with it. If you don't like that, don't let me download your HTML.
The browser vendor CAN'T install tarmac on the shop's floor.
The shop controls the server; the user rightly controls the experience on their machine. The browser vendor provides an application that runs on the user's machine. If the shop doesn't like it, they can pound sand or suggest the user would be better off with a different browser.
They can perhaps rightly say that the deviation from the standard is unfriendly or suboptimal, but one would hope they wouldn't appeal to an imaginary authority derived from a bad analogy.
A raven isn't like a writing desk and a client server interaction isn't like a user visiting a physical store.
Analogies can serve to communicate but when you use them to prove a point the only thing you prove is your lack of understanding of the matter.
Users don't even know. And if you offer a professional site, with support, you will field calls where Chrome's autocomplete confused the user and didn't behave like the spec. The answer is buried in a Chrome bug report. Stupid.
autocomplete is behavior that happens entirely inside the user's browser. You do not own the user's browser. Would you also like to block zooming or other accessibility features because you don't like the way they look?
I think I agree with you - autocomplete=off more often than not just makes me angry. It's honestly a big reason why I prefer Chrome, it tends to do the right thing here.
I didn't realize until now that it doing the right thing from my (the user's) perspective was because it was breaking a web standard, but damn if I don't find it useful.
A living standard, in the sense meant here, is a contradiction. The entire point of industry standards is to create a stable foundation that everyone can rely on so they can build on top in a compatible way.
So aside from the fact that you agree with their decisions, how is this at all different from how Internet Explorer's dev team made arbitrary decisions on how things would work in their browser, and in so doing, because of their market dominance, influenced how the web grew, looked and functioned for decades to come? And how much technical debt had to be tacked onto every project to address that (and is to this day)?
I'm all for moving the web forward and addressing stuff like this is a critical part of that, but this is not moving the web forward, this is moving Chrome forward, and in so doing breaking thousands if not millions of sites, and placing the burden of their repair on their developers with no notice and no better solution, just a different hack than the hack they were using.
It's not really comparable to IE. MS had a pretty explicit goal of destroying Netscape - to the extent that they were making deals with other companies to break compat with Netscape.
Chrome is breaking 'autocomplete=off' on webforms because it's really bad to have autocomplete=off most of the time. For example, this was breaking password managers, forcing users to have a worse experience around something as security-critical as entering a password. If the goal is to have the user never remember their passwords to avoid phishing, this is a really important thing.
I'm entirely, 100% in support of them ignoring 'autocomplete=off' because most of the time I see this as a security flaw on websites. This isn't anticompetitive - they aren't trying to break websites on Firefox, they aren't contacting websites to get them to do some Chrome specific thing. They're ignoring a really bad default that hurts users.
> It's not really comparable to IE. MS had a pretty explicit goal of destroying Netscape
While Chrome has the goal to destroy everything else.
Example 1: Google Chrome spam on YouTube, Gmail, every website on the internet. I can't count the number of times my parents called and asked why Google asked them to "upgrade their browser" (hint: it wasn't a new Firefox build).
Example 2: Sending email SPAM to all Google-users where they are advised to install Chrome if they ever sign in to their Google-account on a new machine using anything except Chrome.
Example 3: Installing Chrome unasked, as a drive-by install, when you install lots of freeware, because Google paid third-party developers to bundle Chrome as a spyware-like installation in their installers.
Google is using spyware techniques to deploy their fucking browser. Google is literally working on killing all other browsers.
And for the good of the web, this should be reason enough to instantly and permanently uninstall Chrome.
Had Microsoft done a fraction of this for anything they did, you'd have seen social media and the EU causing a shit-storm. How come Google gets a free pass?
> Example 2: Sending email SPAM to all Google-users where they are advised to install Chrome if they ever sign in to their Google-account on a new machine using anything except Chrome.
Cite? I've never seen this, and I use Firefox regularly.
> Example 3: Installing Chrome unasked, as a drive-by install, when you install lots of freeware, because Google paid third-party developers to bundle Chrome as a spyware-like installation in their installers.
Again, cite? Because, again, I've never seen this.
I don’t have articles to link to, but I’ve experienced this on numerous occasions.
Working in manual testing, where we regularly set up clean environments from scratch, has probably helped me notice the problem more than a regular webdev would.
Because this is absolutely a problem.
> Example 3 cite
I’d say Google it, but it really is a known and documented problem:
Was it breaking password managers? Because from what I understand, that isn't the case, and the bug linked earlier is specifically titled "autocomplete=off is ignored on non-login INPUT elements"
Also:
"For most modern browsers (including Firefox 38+, Google Chrome 34+, IE 11+) setting the autocomplete attribute will not prevent a browser's password manager from asking the user if they want to store login fields (username and password), if the user permits the storage the browser will autofill the login the next time the user visits the page. See The autocomplete attribute and login fields."
> It's not really comparable to IE. MS had a pretty explicit goal of destroying Netscape - to the extent that they were making deals with other companies to break compat with Netscape.
Netscape is irrelevant. The fact is, Chrome is using its market dominance to affect the implementation of standards and, in doing so, is breaking numerous websites. That is exactly what IE is infamous for, but because Chrome is made by Google, it gets a pass.
> Chrome is breaking 'autocomplete=off' on webforms because it's really bad to have autocomplete=off most of the time.
That is not the Chrome web team's decision to make. The point of standards is that the standards should be followed, regardless of your or anyone else's opinion. If you disagree with a standard, you work to change the standard; you don't just change your browser's implementation of the standard, break a ton of functionality all over the web, and then sit there saying "well, that's how it should be."
> This isn't anticompetitive - they aren't trying to break websites on Firefox, they aren't contacting websites to get them to do some Chrome specific thing. They're ignoring a really bad default that hurts users.
I agree with this in principle, but then they should be working to see that the standard is changed, and do so in such a way that the other browsers can follow suit and allow the standard to be improved. I don't disagree with the stance Google is taking; I don't like that one company is allowed to make such a sweeping change, and all the people who would get all over Microsoft's or Apple's asses about it if it was done in IE/Edge or Safari, are just all cool with it because again, Google is the golden child.
Netscape is entirely relevant. The question was raised about how different Google / Chrome are compared to Microsoft / IE and the answer is Netscape. Microsoft tried to tie the internet into their own platform, Windows + IE, due to them pushing non-standard rendering and ActiveX. Whatever your opinion of Chrome's break from standards might be, Google simply are not trying to lock people into Chrome on Android / ChromeOS. Remember that Google's revenue comes from ads, not software sales (as was the case with Microsoft). Chrome, Android, etc are just platforms to help leverage ads but they still make money from people running Firefox on Windows, Linux and OS X. So while you might dislike the standards Google are breaking in Chrome, the intent is very different from Microsoft in the 90s with Internet Explorer as Google are not trying to control the web (or at least not with the examples given in this discussion. I'm less convinced their intentions are honorable with AMP).
> Whatever your opinion of Chrome's break from standards might be, Google simply are not trying to lock people into Chrome on Android / ChromeOS.
That assertion is laughable on its face. Of course Google does brand lock-in; the fact that their brand reaches further across more markets than Windows/IE did doesn't negate that they attempt brand lock-in at every opportunity they can. In fact, Chrome was largely started because Google wanted to control the browser aspect of the experience - otherwise, why would they make it?
Because it's an obvious way to get more people to default to Google Search. Just like how IE defaults to Bing. Until recently Google was the biggest contributor to the Mozilla Foundation due to their deal to enable Google as Firefox's default search engine.
There isn't any benefit in Google locking users into their free software if it means users on other platforms can no longer use Google's revenue generating services.
Your point about brand awareness is valid but not really the same as what I was originally discussing (software lock-in). E.g. Google's products work fine on other browsers, so there's no actual technical lock-in; however, users are steered towards Google because of brand familiarity and browser defaults for search and home page.
The more people use the web, the more Google gains. It's really as simple as that. People switching their media consumption from traditional media (TV and dead tree) to the web is what makes Google more money.
How? Advertising budgets follow those eyes. Google is a nobody in TV ads, so doesn't get any part from the (big) money advertisers are currently spending there. If these would switch to targeting the web (where Google is the dominant actor), chances are that huge chunks of those ad budgets will be inserted into Google's giant ad placement machine.
> If we don’t want to repeat the 90’s we have to complain a lot sooner.
Complaining wouldn't change anything. Corporations don't change their policies because a few nerds moan on a few message boards.
> If your litmus test is “as bad as Microsoft” then that’s exactly what we will get.
It wasn't my litmus test. I feel you're missing my point because others are making that comparison and I'm saying the two don't compare. In Microsoft's case they broke from standards to lock people into their paid platforms. Clearly that's bad. But in Google's case the platforms are free and the break from standards doesn't lock users into any platforms. You can make arguments against Google break from standards if you want but comparing the two because they involve a web browser is just clutching at straws.
Now, if you wanted to argue about Google corrupting the web then AMP is a far better example. There are far more similarities between MS+IE and AMP even though AMP isn't a web browser:
• both are a free product that locks users into the company's revenue stream (ads in AMP for Google, Windows for MS)
• both push their product through their massive market shares (Google promoting their AMP CDN on Google Search above the search results, Windows shipping IE)
• both onboarded developers with promises to better user experience while locking them into a non-standard platform.
I could go on, but in short, AMP is what you need to worry about with regard to Google shifting the web landscape, not whether Chrome follows spec on a small handful of attribute values. That's a whole lot of hysteria over nothing.
> both are a free product that locks users into the company's revenue stream (ads in AMP for Google, Windows for MS)
Ads on AMP sites don't have to be from Google.
> both onboarded developers with promises to better user experience while locking them into a non-standard platform.
AMP, while a particular subset of HTML/JS, is still just a subset, aka part of the standard platform and can run in any browser. Calling it non-standard would be like calling it non-standard to use React. It's just a library/framework.
AFAIK they do if you want to be on the carousel above the search results on Google Search. It's part and parcel of using the Google AMP CDN. Sure you can use other CDNs (even roll your own) but then you do not appear on the AMP carousel above Google search results (which is highly valuable real estate).
> AMP, while a particular subset of HTML/JS, is still just a subset, aka part of the standard platform and can run in any browser. Calling it non-standard would be like calling it non-standard to use React. It's just a library/framework.
The fact it's a subset makes it non-standard because you cannot then write standard HTML / JS.
To flip the argument, this debate started because Chrome was accused of not following standards because it ignores some HTML properties (eg autocomplete). You could argue the same as you were for AMP that Chrome is following a subset.
Personally I find it a stretch to argue that x being a subset of the standard means x is a standard.
> [google] intent is very different from Microsoft in the 90s with Internet Explorer as Google are not trying to control the web
How very wrong you are on this one point; your argument was going along the right lines, but here it took an unexpected turn. Microsoft was the one truly not trying to control the web. The web was a casualty of something else (killing Java). Now, Google. Google is the one that must control the web. Chrome and Android and AMP all serve exactly the same goal!
I was there at the time, the whole IE Vs Netscape thing started long before Java became a threat (though you are right about how Microsoft also fought that war as well). Plus Bill Gates has said in interviews since that he was trying to lock the web into Microsofts ecosystem because he saw the threat very early on about how it could be platform agnostic.
While you can argue that Chrome and AMP serve the same goal, their approaches are very different. One helps diversify a marketplace and locks neither developers into its ecosystem nor users into its product via anti-competitive practices; the other seeks to create a "second web" controlled by Google, locking companies in with the promise of reduced bandwidth, and users into Google's hosting and advertising, by leveraging its massive market share in search as an effective sweetener/bribe (use us and be placed before the search results).
You can complain about Chrome all you like, but overall, having more browsers on the market is better, and Chrome doesn't lock people into anything aside from brand loyalty. Whereas AMP is a very different beast.
You have good points, but what I believe Gates was referring to at the time was what he saw as a threat to Windows. If you ever have the chance, buy a used SGI workstation from eBay and play with the system. SGI had affordable business workstations (pizza boxes) that shipped with IRIX 5. The main thing about IRIX 5 was that most of the in-house applications (e.g. the settings panel, doc library, etc.) were Mozilla XUL+JS applications! I kid you not. That was 1994, and the Electron concept was alive and strong with Mozilla 2. IRIX 5.3 or 6 (I'm sure I'm getting the versions all wrong from memory) even had the first CSS implementation! And it was mostly to ease the transition from the Motif GUI framework to XUL.
This is what Gates called the fight for the web: saving Windows. That's why they integrated IE so heavily into Explorer. ...Again, they did hit the actual internet ecosystem very hard, but that was a side effect, and they would have been silly not to take some advantage of it too, since they were getting it "for free" while fighting for Windows dominance.
Yeah I've played on a few different SGI systems (and other UNIX systems that are now a historical curiosity). It never ceases to amaze me how we ended up on Windows. Well, not me specifically as I switched away around 20 years ago but "we" in the broader sense.
> That is not the Chrome web team's decision to make. The point of standards is that the standards should be followed, regardless of your or anyone else's opinion. If you disagree with a standard, you work to change the standard; you don't just change your browser's implementation of the standard, break a ton of functionality all over the web, and then sit there saying "well, that's how it should be."
You're exaggerating the point of standards, and/or confusing them with specifications. Standards have always been open to (ed: some amount of) interpretation with regards to how they are implemented.
Except that the Chrome behaviour isn't non-conforming. AFAICT, honoring autocomplete="off" is not required UA behaviour. It's a 'should', not a 'must'.
Whether or not you think Chrome "wants" to destroy the other browsers, we can observe that:
Chrome is the most used web browser
Chrome is deciding how the web should work unilaterally.
These are the exact conditions that produced ActiveX and resulted in many websites with an IE6-only requirement. As much as developers dislike having to design around the various browser quirks, the situation will get much worse if any one browser becomes "important" enough that you can forget the others.
Chrome is at around 60% of desktop user agents, which is obviously significant but nothing on the scale of what IE was in the 90s. What's more, Chrome's market share plummets massively when you take other devices into account. So comparing Chrome to IE is a little premature when talking about browser monopolies.
Next let's look at the actual technology we're complaining about. ActiveX wasn't just an Internet Explorer-only feature; it was Windows-only as well. Furthermore, it wasn't the only feature in IE that wasn't supported in other browsers (VBScript, JScript, WebForms†, different support for HTML and CSS, etc.). Chrome, by comparison, doesn't diverge from other browsers all that significantly.
Chromes changes could be compared better to Internet Explorer with regards to how IE4 changed the way frames were used (inc the addition of iframes). This was really annoying as a developer but actually these changes were for the better - particularly the iframe - and eventually those features landed in other browsers. Including Netscape Communicator.
The W3C are pretty glacial at updating their specs so it's often required browsers taking things into their own hands, eg the browser CSS prefixes. Developers would moan about having to add lots of extra junk to their code just to get new features to work but it was the price to pay for having access to new features that hadn't landed in the various web specifications. So as much as they would moan about vendor prefixes, they'd welcome browsers extending the specification.
The above paragraph wasn't all that long ago, and the difference between then and now is that Chrome, Firefox and Opera were all at it, with IE still playing catch-up on compliance; but now the browser market has consolidated: Chrome (read: Blink) has absorbed Opera, IE has largely died off, and Firefox is sadly slowly falling out of fashion. So people are paranoid that Google might abuse their advantage. However, I really cannot see that happening - or at least not with regards to Chrome. Not only is Chrome's market share small compared to IE's in the 90s / early 00s, but the web is also viewed from a multitude of different devices. As someone who has managed online shops, games and other popular web-based services, I can testify that product owners do NOT want to lose business from portable devices - if only because some product owners are so technologically inept that their primary computing device outside of the office is an iPad and/or their business iPhone.
† I think that's what it was called. But basically it was their 90s tech for writing apps that looked like WinForms.
> Chrome is at around 60% of desktop user agents, which is obviously significant but nothing on the scale of what IE was in the 90s. What's more, Chrome's market share plummets massively when you take other devices into account.
Yeah. Where "other devices" are mostly Android phones running Google's fork of WebKit - i.e., Chrome.
I don't think it's as clear cut as that. You'll have iOS devices and a few Windows Phones too. Plus on Android it's pretty common (albeit mostly among power users) to install custom browsers like mobile ports of Firefox. Then you have other devices like games consoles. Things get pretty fragmented pretty quickly once you start investigating the "other devices" rabbit hole.
However that all said, when it comes to product support, I've found product owners generally only care about Blink (Chrome / whatever) on Android and Safari on iOS. So there's usually little interest in all the other random browser + platform combinations seen in "other devices".
The only big ones that are independent are Safari (since Google forked Webkit to create Blink), Firefox, and IE/Edge.
And the impression I have is that Safari is lagging, Mozilla is directionless and struggling to keep Firefox relevant, and MS is, well, MS.
Ever since Opera folded and made their browser a Chromium clone, only Mozilla has been carrying the banner of standards correctness. And even they seem to bow more and more to Google's decrees (while virtually cloning the Chrome look and feel with each Firefox update).
Mozilla is directionless and struggling? Huh? They just came out with Firefox Quantum, which is a huge step forward, and I believe might have taken the web performance crown.
Quantum is a great improvement to Firefox. I'm running 57.0b14 ATM (normally I'm quite conservative with anything except Emacs) and quite pleased. The performance improvements are quite noticeable. Though I do wish they created a more user-extensible browser, a bit à la Emacs.
"Individual developers working for individual companies are rarely if ever thinking about the good of their users, except in a very narrow profit motivated sense, and never thinking about the good of the ecosystem as a whole."
Are you serious? No one thinks this way? I've worked at several companies where the software I've written was EXPLICITLY written with others in mind. To claim "rarely, if ever" shows that you know very little about this industry. Or, you surround yourself with a bunch of selfish people who shouldn't be doing software development.
> Individual developers working for individual companies are rarely if ever thinking about the good of their users, except in a very narrow profit motivated sense
But the megacorp Google is acting all altruistic and not at all abusing its monopoly powers?
I think many will see this as a type of tyranny, and therefore will be worried about the future implications of yielding authority to a single monolith.
Google have been doing something similarly questionable for a long time by dictating how sites have to be presented if they want to rank highly in the search engine. They're just being more brazen about it now by actually breaking sites in Chrome.
There are valid arguments for both sides, but instead of forcing this upon everyone, Chrome team could have added it as a config option. Set the defaults for autocomplete anyway you like, and then let the user change the behaviour only on some sites. Or if you don't wish to pollute the settings with this, extract this behaviour into an extension and let the people choose if it bothers them enough to go and install it.
Either Google's solution is better or it's not. If it is, then Google should make it the default even if it provides an option. But even if they provide the option, 99.999% will use the default, rendering the option basically worthless.
If it's not, they should probably just work the way everyone else works.
No, it's better for users who read the auto-filled boxes to make sure they are sane (which is what I do). It's not good for users that don't understand what auto-fill is (i.e. they think the website is suggesting it to them).
If you use that approach on everything then you admittedly get clean and easy-to-understand configuration, but you also reduce the usefulness of your app and make some of your regular users very unhappy... I stopped using Ubuntu recently for exactly that reason: too many options were cut and hidden away from the GUI, with just the defaults enforced on you, to the point that Windows Explorer is now a far more powerful and customizable tool than Nautilus.
Nautilus is not exactly a power tool. Personally I rather enjoy spacefm.
It's super trivial to bind hotkeys to shell functions that operate on the current directory or files or whatever. I know that some of this can be done with custom actions, but it's nice to be able to a) bind keys, not just provide a context-menu item, and b) create a custom command with a few clicks.
Example: something like autojump is a common tool for shell users. It's nice to bind a key that pops up a prompt, type in a string, and have your file manager jump to a directory using the same source of info as autojump.
This took like 30 seconds to figure out and set up.
Another example: a hotkey that shows a thumbnail gallery of every image recursively below the current dir.
Most of those examples seem "good" because they prevent nasty behavior. This is conformance to the lowest usage, just like the FBI tries to argue against encryption because bad guys are using it too.
But it is the job of browser developers to build programs that serve their users. If I, as a user, want to autocomplete a form field, the website operator doesn't get a say in whether I can do that.
The settings that come with the markup are best understood as a recommendation, a slight hint that autocomplete might not be appropriate here. The final say is always with the user, though.
Sadly, through over-usage on the part of website operators, this hint has become utterly useless.
But completely ignoring it isn't super helpful either. I run into more issues with Chrome autofilling forms inappropriately than I ever did with forms that I couldn't autofill.
I think the solution is to follow the hint, but give users an override button next to or inside form fields that have it set to off.
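On the browser side, that proposal could reduce to something like the following pure decision function - a sketch under stated assumptions, not how any browser actually implements this; the function name and the userOverride flag are hypothetical.

```javascript
// Sketch of the proposed policy (hypothetical): honor the site's
// autocomplete=off hint by default, but let an explicit per-field
// user override win - the "override button" next to the field.
function shouldAutofill(field, userOverride = false) {
  // userOverride: the user clicked the hypothetical "allow autofill"
  // mark the browser would render next to an opted-out field.
  if (userOverride) return true;
  return field.autocomplete !== 'off';
}

console.log(shouldAutofill({ autocomplete: 'off' }));        // → false
console.log(shouldAutofill({ autocomplete: 'off' }, true));  // → true
console.log(shouldAutofill({ autocomplete: 'name' }));       // → true
```

The point is just that the site's hint stays a hint: the user's explicit click always wins.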
How about deciding to ignore requests to play a sound? I like it as a user, but as a dev, I now have a bug report that says the cash register isn't making a beep sound when the user scans an item. Yes, there is a setting to turn it off, but now there are quite a few people confused about why it quit working.
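One mitigation on the dev side (a hedged sketch; the asset path and element id are made up): `HTMLMediaElement.play()` returns a promise that rejects when the browser blocks playback, so the register UI can at least report the muted state instead of failing silently.

```html
<span id="sound-status"></span>
<script>
  // play() returns a promise; if the browser blocks the sound (site
  // mute setting, autoplay policy), the promise rejects, and we can
  // tell the cashier instead of beeping silently into the void.
  const beep = new Audio('/static/beep.wav'); // hypothetical asset
  function scanBeep() {
    beep.play().catch(() => {
      document.getElementById('sound-status').textContent =
        'Sound is blocked by the browser for this site.';
    });
  }
</script>
```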
Isn't this a reaction to moronic use of autocomplete=off in a user-hostile way by various websites? In a similar vein to how abuses of disabling the back button, popups etc have led to defensive measures by browser vendors.
I'm one of the folks who pushed for autocomplete=off to be disabled for password fields because of abuse. Basically, browsers obeyed autocomplete=off to shut off password managers; and sites used this to shut off password managers for "idk lol security" reasons.
This isn't what the autocomplete attribute was for in the first place; password managers have a different workflow and saving a password is prompted to the user, so we removed it. Password managers are not autocomplete features, basically.
Ignoring autocomplete=off on everything doesn't seem to be a good change, though. I would love to be proven wrong, but I don't think there are any ways autocomplete=off can be abused besides the password manager thing, and it hasn't affected password managers for years.
I hereby confess to being guilty of adding autocomplete="off" to a login form, because the client's corporate IT only cared about ticking all the boxes on a security conformance report. We have tried to fight this and some other rules (e.g. log the user out after 20mins), because this was quite a simple website, but the rules were strict like it was online banking at the very least.
Now OWASP has changed their mind, but the damage has been done.
> Since early 2014 most major browsers will override any use of autocomplete="off" with regards to password forms and as a result previous checks for this are not required and recommendations should not commonly be given for disabling this feature. [1]
Corporate security requirements usually seem to come down to some committee sitting down and looking at every available option and demanding that they be configured to the most secure looking option, without regards to how much it actually improves security.
This is a continual frustration of mine, when valuable diagnostic data is disabled for security reasons and then when stuff breaks I'm asked to go fix it blind. Sure you can have gigabytes of audit logs that are impossible to find anything in, but an error message that tells you that some system certificate expired is too much of a security vulnerability.
Autocomplete=off is abused, certainly. It's commonly used to interrupt password manager functionality, in much the same way that copy/paste disabling is used on "repeat new password" fields. (As an aside: disabling autocomplete is a good idea, but only on the password manager level, not the website level. It's defense for user privacy, so employing it on well-meaning websites is worthless.)
But it's not abused in the user-endangering manner that circular redirects and back button hijacking have been. (Specifically, to make scam sites hard to escape and easy to click into.) It's just an inconvenience to users, and honestly I'm not thrilled to see browsers override code for non-security reasons.
There is no general solution; only the end-user is smart enough to know when autocomplete should be used, and asking them to specify it for every field is too much work.
Personally, I installed a Safari plugin to ignore autocomplete=off because it was so annoying. So Chrome is doing what I want my browser to do.
That's just it, maybe it is time to put it back into the hands of the end-user? You don't need to ask the user for every field, just an override for fields that have autocomplete=off. Add a simple mark for "Autocomplete was turned off on this site for this field" where clicking it overrides. You could add similar marks for fields with onpaste handlers to deactivate them.
To be clear, it appears that Chrome disables autocomplete=off for all fields and forms, not just for password fields. The linked ticket is about password fields in Firefox specifically.
You're refusing to use Chrome because it gives the middle finger to websites that want to disable your password manager? Are you making this decision as a user or a website owner?
I haven't run into this issue but the autocomplete=off thing annoys the cr@p out of me.
If you have a simple web app that has an administrator mode for editing accounts, you should be able to turn off autocomplete so your password doesn't automatically get filled in for users that you edit.
Your mistake is thinking a login is a person and a person has an email.
I have run into all three of these scenarios:
1) A login for an administrator area on a server has a different login/password than the main area, and the browser always fills in the main login/password, even though I ask it not to. autocomplete=off would fix this.
2) A new user and edit user screens exist in web apps that are driven by users sign-ups. Sometimes these things need to be coordinated with other systems in a corporation and users do not always have their own email (I know this is a shock for you, but it is very common). autocomplete=off would fix this.
3) Using a remote admin tool to set up a datasource on a server, the username and password fields refer to database credentials, not a person. Yet Chrome keeps filling in my info. autocomplete=off would fix this.
Personally, I found forms that blocked autofill before the change more annoying than the times I've seen Chrome fill out the wrong fields after it.
I had a form on the website I administer that closes a user's account permanently. It requires them to input their password and click a button (with a confirmation dialog). Chrome was seeing that password input and auto-filling it, making it very easy for a user to accidentally close their account. It did the same thing for a "change username" form, which also had a password field to confirm the change. Chrome thought it was a login form and would pre-populate the "new username" field with the user's existing username. I tried so hard to make Chrome not do that.
Chrome also recently broke sendBeacon functionality in a rather unusual fashion. The API is documented to return false if it fails, but apparently it'd be way more hilarious if, instead of returning false as documented, Chrome just started throwing exceptions... Oh, and the reason for the exception is a bug in Chrome's security that will be resolved at a later date.
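A defensive wrapper can guard against both failure modes at once — the documented `false` return and the undocumented exception. This is a hypothetical sketch, not a fix from Chrome or any library; the injectable `beacon` parameter is there only so it can be exercised outside a browser:

```javascript
// Hypothetical defensive wrapper around navigator.sendBeacon.
// The spec says sendBeacon returns false when the data can't be queued,
// but some Chrome builds threw an exception instead, so guard both paths.
function safeSendBeacon(url, data, beacon = (u, d) => navigator.sendBeacon(u, d)) {
  try {
    return beacon(url, data);
  } catch (e) {
    // Treat the undocumented exception the same as a documented failure.
    return false;
  }
}
```

Callers can then branch on a single boolean instead of wrapping every call site in its own try/catch.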
Regulatory compliance. I've worked on applications that had to go to great lengths to get the target browser platform to not do something like auto-complete authentication credentials.
Philosophically speaking, some Web Platform developers think it's more secure for the browser to autofill credentials from a keychain, so that users can use better passwords and not be burdened with remembering N pseudo-random character sequences. Sounds good. Probably is good.
Whether or not you agree with the above, there are plenty of regulations in certain settings that require us to disallow client applications from auto-filling form fields. We have to battle with the Web Platform authors to fill this niche and work around everything they put in our way to stop us from doing our job. Awesome.
Personally I thought feature detection was a smell and was glad to see it going the way of the dodo in the early aughts. Not so glad to see it making a comeback.
If I recall in Part 11 compliance (which is how the FDA regulates software in the US) one is required to ensure that "Passwords are not remembered by [browsers] and applications."
From an ISO/IEC/IEEE 29148 perspective the language might be "shall not remember passwords" which would imply a legally binding requirement for compliance purposes.
This doesn't preclude applications from using autocomplete on form entry from password managers; just that the browser is not allowed to remember the entered password.
However, in practice I've seen most systems deploy their applications on a target platform that runs the browser in "kiosk" mode, which, if I recall from the time, was an IE-only thing. In more modern times we're starting to see consumer-level tablets and devices enter the mix, and I'm not even sure if those can be locked down into a multi-user/kiosk type mode. Regardless... the web is a difficult platform for this kind of stuff.
> Regulatory compliance. I've worked on applications that had to go to great lengths to get the target browser platform to not do something like auto-complete authentication credentials.
If I had a penny for every time I've heard this excuse for really terrible, unsafe practices, only to find out that the developer deliberately misinterpreted the rules to make their job easier... I'd be rather rich by now.
From a historical perspective, this is nowhere near the sort of behavior one saw from Microsoft in the 90’s, but that’s pretty faint praise - nobody wants anything like that to happen again. You shouldn’t have people contributing to competing products out of spite.
I hope we never hit the point again where it is a cliche for FOSS people to bond over shared hatred of a company. I’d like to think we have been inoculated against that. I like to see people who will speak up like this early and often.
This is the Internet Explorer way of doing things all over again: instead of following the standard, guess what people wanted to do, and in doing so create a new de facto standard.
I am going to have to disagree there - I think putting autocomplete control in the standard was an error, just as a webdev shouldn't be able to refuse copy and paste. That is inherently a user choice.
I am the Director of UX for a CRM SaaS product. The ignoring of web standards (like autocomplete flags) is misguided and totally obnoxious. Chrome is terrible about stuff like this. It drives me crazy.
They've also done something so threatening to users' security that I can't take them seriously on the issue. They've been doing it for years, they refuse to change, and it makes me conclude that they fundamentally don't understand the problem.
Every update of chrome erases their password manager entries. So for every site their password has to be re-entered.
Why is that dangerous? Because it means that my cousins, nephews, and other regular folks cannot trust Chrome to keep their passwords, so they must either reuse passwords or write them down somewhere. Obviously, a third-party password manager is simply not an option for these folks - they rely on the browser. The browser can get it right or get it wrong. Chrome gets it wrong.
I was a little skeptical too, but it seems to be encrypted locally before going anywhere, so it seems like it's alright to use. Anyway, if the deletion's compromising security as much as you say, it might be an acceptable trade-off.
Wait, Chrome updates erase your stored passwords!!?
At least for me they don't. I have 2 passwords stored in Chrome (not a site important enough to go into KeePass), and have been through multiple updates and yet to lose any passwords.
Ah, that explains it then. I'm currently on Windows, but I used Linux for more than a decade previously, and password storage in Chrome was consistently problematic. The coordination with gnome-keyring-daemon always seemed flaky... sadly it doesn't look like the Chrome team is in a hurry to improve this area.
There's a lot of confusion about autofill. Specifically for these user management interfaces etc. autocomplete="new-password" was introduced and as far as I know is supported in widely used browsers.
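To illustrate the point above, here is a minimal sketch of a "create user" form for such a management interface. The field names and action URL are purely illustrative; only the autocomplete values come from the standard:

```html
<!-- Hypothetical admin "create user" form. autocomplete="new-password" tells
     the browser this is NOT the admin's own login, so it should offer to
     generate a fresh password instead of autofilling stored credentials. -->
<form action="/admin/users" method="post">
  <input type="text" name="new_username" autocomplete="off">
  <input type="password" name="new_user_password" autocomplete="new-password">
  <button type="submit">Create user</button>
</form>
```

The value `current-password` exists for the opposite case, a genuine login form, which is how browsers are supposed to tell the two apart.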
As I've seen others mention, there seems to be a divide between the most user-friendly behavior for: (1) the public webpage with a wide-ranging user base that will be consistently updated, developed, and monitored for the widest compatibility and latest security, and (2) the CRM, intranet app, hardware config portal, etc. that will not. Do any web devs have recommendations on how they handle the two circumstances?
And, unless this has been fixed (does anybody know?) ..
Autocomplete is a security problem that can be used to get more information from users than they think they are providing: just ask for one field like email, but mark the other fields to be autocompleted as hidden, and the browser helpfully gives the site all the other fields the user didn't realise were being autocompleted.
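The shape of the attack described above looks roughly like this (hypothetical markup; whether a given browser actually autofills off-screen fields varies by version and was the subject of the fixes being asked about):

```html
<!-- The user sees and fills only the email field. If the browser autofills
     the visually hidden fields from its stored profile, their contents are
     silently submitted along with the form. -->
<form action="/newsletter" method="post">
  <input type="email" name="email" autocomplete="email">
  <div style="position: absolute; left: -9999px;">
    <input type="text" name="name" autocomplete="name">
    <input type="tel" name="phone" autocomplete="tel">
    <input type="text" name="address" autocomplete="street-address">
  </div>
  <button type="submit">Subscribe</button>
</form>
```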
>This is my very problem with Chrome/Chromium right now. The Chrome team does assumption on how things "should" be (in a highly subjective way) and breaks the web.
And that is how IE used to operate. I think this is something we're going to have to deal with for a long time.
> Turned out, Google wasn’t concerned about your websites at all. It was more concerned about its own product performance, Google Chrome Mobile.
As a web developer, I see this attitude a lot and it annoys me immensely. Another way of phrasing it: Google is putting users first, ahead of developers. This is as it should be. "your" website exists to serve users, if you're doing a bad job at it then maybe it's an opportunity for self reflection.
Janky scrolling behaviour on mobile has been a problem for a long time. Apple also implemented non-standard behaviour for years to avoid it. You should almost never be listening to scroll events in a non-passive way. The vast majority of times scroll listening is used, a passive listener is the correct implementation and just wasn't available when the code was written.
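The standard way to opt in explicitly — and, on older browsers, to detect whether the options object is understood at all — looks roughly like this. The detection trick is the widely used community pattern, not anything Chrome-specific; `addListener` is injected here only so the helper works outside a browser:

```javascript
// Feature-detect support for the `passive` option: a browser that understands
// an options object will read its `passive` property, while an old browser
// treats the third argument as a useCapture boolean and never touches it.
function supportsPassive(addListener) {
  let supported = false;
  const opts = Object.defineProperty({}, 'passive', {
    get() { supported = true; return false; },
  });
  try {
    addListener('test', () => {}, opts);
  } catch (e) { /* some engines throw on an options object; treat as no support */ }
  return supported;
}

// In a browser you would then register the listener accordingly, e.g.:
//   const opts = supportsPassive(window.addEventListener.bind(window))
//     ? { passive: true }   // promise never to call preventDefault()
//     : false;              // legacy useCapture fallback
//   window.addEventListener('scroll', onScroll, opts);
```

Sites that genuinely need `preventDefault()` can still pass `{ passive: false }` to opt out of the new default.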
This is proven by the article itself: the change was made in February of this year. Do you remember the internet breaking that day, and all of us rushing to update our event listeners? No, me neither. Did Chrome really break the web when absolutely nothing broke?
The web is the only platform aside from perhaps some assembly languages that is that stable and that ubiquitous. Websites have always been a "write and forget" deal; there never was supposed to be any feedback loop of developers fixing breaking changes. This is why old websites from 1998 still work in your browser.
This is why there are piles and piles of cruft within browsers and web standards for making sure old behavior that folks may rely on continues to work.
The commitment to stability on the web is astounding; I don't see anyone changing that any time soon. Google should not be breaking things just "for its users", because its users are affected when things break too.
That's the problem with Javascript. When Javascript breaks, it's not the developers who get the error, it's users, who have no idea what it means and no way to fix it. This is not the case with e.g. compiled languages: there's no extremely strict guarantee your code will still compile on a new release, but it's the developer who gets the error, not the user.
> This is why old websites from 1998 still work in your browser.
They won't if they use a <blink> tag, for example. The problem is that if we freeze the entire web and demand 100% backwards compatibility, we also can't ever move forwards. This scroll event change is a positive in 99% of cases - should a web site built in 1998 really hold that back? There's no absolute in "putting users first" there - you're either putting the minority user looking at an old site first, or you're putting the majority looking at newer sites first.
Maybe it wouldn't be the worst thing in the world to introduce an "archival" mode in a browser. Nothing published on the web is ever truly broken because it's very well documented what it should do. So we can turn those features back on when viewing an older site that requires them - blinking text and all - while still moving forward on the platform people use every day.
> There's no absolute in "putting users first" there - you're either putting the minority user looking at an old site first, or you're putting the majority looking at newer sites first.
I'm not talking about old sites; I'm talking about current sites. The old sites were illustrative examples of the backwards compatibility in the web.
The point is that the web has always been a platform you can deploy code to without needing to tend to it later.
This post is an example of it breaking a current site.
There is a thing called Forward Compatibility. Not everything old has to die to move forward.
Our OpenGL 4.x video cards still run OpenGL 1.x. And no, the fixed-function pipeline of 1.x isn't anything like the modern shader pipeline. (Sidenote: many cards were designed in the 3.x era, and nVidia backported 4.x support into their older cards, which is amazing to me.)
>Maybe it wouldn't be the worst thing in the world to introduce an "archival" mode in a browser.
But we're talking about a very specific example, where the best option for the vast majority of users was to change default behaviour. There is no "passive event listener still implements active event listener API" equivalent here. You have to make a choice.
I don't get why those guys stand against explicit versioning in JS for the sake of forward compatibility, when all it leads to is no IE6-era JS-heavy website running in modern Chrome.
In a few years' time, the same thing will happen to the Chrome JS ecosystem.
Yet a website that's a few years old is most likely broken now, because half of the API-interacting functions in its JS code are incompatible with current browsers.
>The web is the only platform aside from perhaps some assembly languages that is that stable and that ubiquitous.
The first part I agree with. The second, I don't understand.
.NET and Java are way more universal than assembly language unless you really split hairs and pretend "any byte code generated by a machine that ends up as assembly = assembly."
> Another way of phrasing it: Google is putting users first, ahead of developers.
For the most part, Google is putting its developers ahead of your developers. They could have e.g. jitted scroll handlers to bail out of passive, but did not, and instead broke all active scroll listeners.
> The vast majority of times scroll listening is used, a passive listener is the correct implementation and just wasn't available when the code was written.
And once again minorities get to visit the mass grave through no fault of their own.
> They could have e.g. jitted scroll handlers to bail out of passive
Could they? From my understanding passive listeners do not stop execution, so even if a specific handler wants to bail out at run time it will already be too late to do so.
> once again minorities get to visit the mass grave
You don't think you're being a touch hyperbolic here? You can still use a non-passive listener if you want to, you just have to opt into it. Defaults should serve the majority use cases, that's the whole point of having them.
I can appreciate this outlook, but I still worry about "just wasn't available when the code was written".
Code rot is very real for websites, and I worry that in the quest to do things 'right' Chrome is undervaluing the incremental harm done by every change. I don't think this action broke the web, but I'm wary of the idea that breaking changes can be casually blamed on the people who didn't update fast enough.
Change is bad, inasmuch as it makes new work for maintainers and harms user experience where maintenance isn't prompt. It's often necessary, but there ought to be an open discussion about timeframes and impact before a change happens. In this case, the discussion largely happened after the fact.
The real problem here is the fact that active listeners are still a problem to begin with.
It'd be absolutely insane if a native app did not get first crack at all input events. This is how they all work (iOS, Android, Windows - take your pick), and the mere existence of a touch/scroll listener is never a problem.
But on the web simply having a scroll listener is such a massive performance issue that it's worth introducing not just a new API to get events after they've happened, but to then make that the default? Why not just fix the performance problem instead of hacking and slashing around it?
No, it's not to serve users. Users don't want broken webpages either. It's to make their own browser look faster at the expense of webpage authors, because users will blame webpage authors for their webpage being broken. If they knew they could have the same page unbroken with slightly worse performance, no one would choose the better performance.
Sure, there's something to be said about webpage authors getting off their butts quicker if their webpage is actually broken, rather than just not quite as fast as it could be, but the majority of webpages are not actively maintained. Chrome is simply breaking those. With only 8 months of a transitional period, there's no two sides to this argument. What they've done is irresponsible in every way.
The problem is that it wasn't only the scroll event but also the mousemove event.
I maintain a library of interactive widgets for an education product with a lot of dragging and dropping and I was hit hard by this.
I understand the reasons behind this, and I agree something had to be done, but Google is making it difficult for devs to sympathise with the unilateral intervention. It's not the what, it's the how.
that's a load of bollocks. if google wants a better web, it needs to work toward better standards. implementing a vendor-specific api by fiat with broken detection is what fucked the web in the first place!
same with apple - "now we'll ignore user-scalable=no and screw every responsive webapp out there"
instead of users punishing badly behaved applications, vendors are indiscriminately breaking well behaved apps whether they're doing the right thing or not (ie. apps that use em and thus respect user accessibility settings)
this leaves both developers AND users with a sub-par browser experience - like how all canvas games out there now get zoomed on double taps, an animation which is often enough to stress the device gpu so much that the browser crashes.
I'm all for improving the web, but there are ways that work, and then there's this force-feeding crap to developers without thought and foresight, and we as a community need to call what's good and what's crap for what it is.
I mean, honestly, you should have seen the writing on the wall with user-scalable: it inhibited the user from doing something they wanted to do, it hurt accessibility, it made websites subtly inconsistent, and there isn't a good way of asking the user if they would like to disable zoom.
Since you want to be pedantic: the user-scalable and user-zoom viewport properties aren't an official standard. They currently exist in the CSS Device Adaptation Module Level 1 Working Draft.
Here are the comments on the property:
> Authors should not suppress (with user-zoom: fixed) or limit (with max-zoom) the ability of users to resize a document, as this causes accessibility and usability issues.
> There may be specific use cases where preventing users from zooming may be appropriate, such as map applications – where custom zoom functionality is handled via scripting. However, in general this practice should be avoided.
> Most user agents now allow users to always zoom, regardless of any restrictions specified by web content – either by default, or as a setting/option (which may however not be immediately apparent to users).
> same with apple - "now we'll ignore user-scalable=no and screw every responsive webapp out there"
Maybe because 90% of responsive webapps would be better implemented as standard HTML websites, which would improve performance and help preserve battery.
According to me. Maybe even higher than 90%. There are a few applications where you need a single-page app; Google Docs is an example, or websites that integrate a messenger (e.g. Facebook). For the rest, SPAs are a waste of resources and work worse on slow internet. Reddit is a prime example: UX has deteriorated since they switched to loading everything dynamically (again, only me as a source), with no good reason to implement it that way.
To be honest as a user, I'm glad they made this change, even though as a developer it might be a pain in the ass for a day or 2.
Scrolling on the mobile web sucked for a long time. Yes, updating the default broke many things, but the scope of that breakage was fairly limited in the grand scheme of things (like the author said, sliders, maps, and touch-draggable things like lists are the most impacted; stuff like "sticky headers" and other junk would be affected, but not broken beyond usage).
We've seen time and time again that simply giving a developer the ability to fix things doesn't help, and letting the mobile web suck for several years while a few percent of the web slowly learned how to fix this would end up impacting far more people than the "breakage" ever would.
It's not ideal, and I wish that google provided a very quick and easy way to "opt out" of the breakage (like some one-line "polyfill" that reverted the change that site owners could use as a stop-gap until they could properly update their apps to work correctly), but I see their point and as a user I agree with it, even though as a developer this is the kind of stuff that ruins your day.
I personally have not hit a single website that was impacted by this that I could notice myself. That's not to say that I haven't been impacted by the breakage, or used sites that had something break because of it (that I didn't notice). But even knowing that this change was made when it was made, I didn't see any sites that were "unusable" or "broken" because of it.
> "polyfill" that reverted the change that site owners could use as a stop-gap until they could properly update their apps to work correctly
then the site owners would just use the polyfill indefinitely, since it now works again. The more expensive option of rewriting to conform is not going to give a return on investment.
This is why breaking a bad thing is needed - the suffering has to happen. It's like the flu: if you've been infected, you have to get sick before you can get better.
Possibly, but if they are that useful to users, those sites either need to be maintained or forked. Your argument could have just as easily been used to argue for Adobe Flash support(OK it's still sort of around, but it's nearly irrelevant at this point).
I am not going to go so far as to say breaking backward compatibility is never the right way to go, but "those sites either need to be maintained or forked" is not what is likely to happen; rather instead they'll just say "you have to use such-and-such a browser at such-and-such a version" and lock it in that way for a long time. Consider how big a business it was (is?) to have VMs specifically to keep running Web apps in IE6.
Maintained by whom, forked by whom? You can't just fork someone's work; there is copyright. Also, any saved copy of a website (think Web Archive) will stop working.
The same way getting your flu shot can make you feel worse for a bit, but can prevent you from getting the flu which will make you feel worse for much longer, and it can help prevent others around you from getting the flu.
Yes, it breaks some websites, and for those users in that moment things might be worse, but it is making the web better for significantly more people, and even the users affected aren't really "impacted" by it in most cases (generally only seeing "sticky" headers warp around while scrolling outside of a few specific types of applications).
This is a really paternalistic attitude. The local presbyterian church down my street can't afford a full-time web developer to update their website every time Google decides to break it. The idea that sacrificing actual functionality in exchange for smoother scrolling makes the web "better" is a claim that makes sense only in a terrifically isolated bubble.
Yes but not all breaking changes are equal. The church down your street was extremely unlikely to even be impacted by this change let alone have functionality broken by it.
This only affected websites which were using blocking scrolling event listeners. Scrolljacking, some mapping websites where you were able to tap and pan around, and some sticky headers, and only on mobile devices using chrome.
And of those features impacted, most weren't even "broken", they were simply delayed until after the user lets their finger off the screen. A "sticky" header will still work with this change, it will just "snap" into place after scrolling. Scrolljacking content will act in a similar manner.
I'm completely against breaking the web wherever possible, but in this case I really believe it to be worth it. Smooth scrolling makes sites usable, janky jumpy websites on mobile are so infuriating, and on lower spec devices can be completely unusable. This makes them usable again, it wasn't just "breaking the web" it was "fixing the web" for many people.
In my mind I see this like I see popup blockers or even adblockers built into the browser. Yes, they break functionality, but they do so for the user, and if the website needs to get around this change, they can easily do so without the user having to do anything (unlike popup and adblockers in most cases).
> The church down your street was extremely unlikely to even be impacted by this change
Why do you say that? Let's say, for example, that they use Discourse [1] to provide a web forum for churchgoers on their website. Oops! Mobile scrolling is completely broken for them [2] until they manage to update Discourse to the latest version. Is this the flu shot you were talking about?
No, mobile scrolling wasn't "broken" for them, scrolling via their "timeline" feature was broken.
And while they would need to update, if they were using the hosted Discourse plan it would have been fixed for them, and if they were using the self-hosted plan, they are running that on a Linux server which needs updates and maintenance periodically anyway, or a broken timeline will be the least of their problems. Luckily Discourse will email you when there is an update, with a link to the admin page that lets you one-click upgrade the Discourse install on your server from the web UI.
So yes, this is exactly the flu shot I'm talking about. A product was broken in a fairly minor way with this update, and a fix was developed within 24 hours which consisted of 2 lines of CSS (both the same, but in 2 different spots). And now the product and the web as a whole (on chrome) are less laggy and janky on mobile devices.
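The actual Discourse patch isn't quoted in this thread, so the following is only a representative sketch (the selectors are hypothetical) of the kind of CSS-level fix available: the `touch-action` property declares up front which gestures an element handles itself, so the browser can scroll without ever consulting a JavaScript touch handler:

```css
/* Hypothetical example, not the actual Discourse change. */
.timeline-scroller {
  touch-action: none;   /* element handles dragging itself; browser won't scroll */
}
.post-stream {
  touch-action: pan-y;  /* vertical panning stays with the browser, stays fast */
}
```

Because the intent is declared statically, no `preventDefault()` call is needed, and passive listeners become a non-issue for elements styled this way.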
> No, mobile scrolling wasn't "broken" for them, scrolling via their "timeline" feature was broken.
So, mobile scrolling is broken, but not all the time. Is that really the distinction you want to make right now? For the individual who said "[c]omplaints from my users are starting to come in", is that an adequate response?
> Luckily Discourse will email you when there is an update with a link to the admin page that lets you one-click upgrde the Discourse install on your server from the web UI.
Please do tell that to the parishioner whose son set up the website a year ago before he left to go to college. We all agree that software upgrades are important for security, but that doesn't justify imposing an additional artificial upgrade burden due to a backwards-incompatible breakage to the web.
The timeline feature is the sidebar that shows you where you are in time since the first post. When you drag along it, it moves you to that date and time. It's a cool feature, but not integral to the application.
Scrolling still worked... 100% of the time on 100% of devices.
And as for the upgrade, if the parishioner can't click [0] in the web UI (a link to which was emailed to them), then I'm not sure how they are using discourse at all. And IMO they have no business hosting anything themselves as it will most likely turn into a DDoS bot within a year.
But again, there is no need to do this, as the forum software worked completely fine for its core functionality and only scrolling via the timeline was broken.
You're showing a pretty serious case of cognitive dissonance here. What previously was a theoretical problem was actualized with a real world example with actual affected users, and you've shifted to an entirely subjective argument that the breakage is "not integral to the application".
You also chose to rationalize the breakage by shifting the blame to the victim -- they have "no business hosting anything themselves". Well, great, but that isn't going to stop them. Maybe they want to ensure their web forum isn't censored by a hosting provider. It turns out that people still host their own websites in 2017, and until they all shift to centralized monolithic providers these bugs will persist for years, and you will persist in rationalizing the breakages as "flu shots".
Reread my first comment. I pointed out that the breakage would be limited in scope. Very, very few applications were broken by this (broken being defined as having its core functionality unusable).
And you seem to be completely ignoring the fact that the upgrade "path" is one click of a button. Whether you agree with my stance on the responsibility of hosting your own internet connected server or not, it's a one click button upgrade, to a fix which was made 24 hours after the problem appeared.
If your problem is a minor UI breakage of a side feature of a well maintained web application that doesn't get updated because the admin doesn't know how to click a button in response to a fix which significantly improved scrolling performance for all chrome users, how do you justify any changes at all? Call it rationalizing if you want, I'm saying it's worth the extremely small amount of pain for the bettering of all chrome users.
How can you change anything at all if that is your threshold for unacceptable? How can you even update a single line of code if you feel that such a small amount of breakage is not allowed?
Against my better judgement I did a reverse image search and found the post that screenshot came from [1]. The post is full of people having exactly the sort of problems you would expect from an automatic software updater, along with instructions from Jeff on how to manually update using SSH and git.
These things happen. The world is messy, and whether due to technical inadequacy or run-of-the-mill software updater bugs, web backends will not always be updated. As you've noted, this could eventually lead to security problems, and I agree -- but that doesn't justify intentionally breaking their code. To say that their breakage is justifiable because they'll be 0wned at some point anyway is an absurdly darwinian argument.
To address your last question, the way you update a system like the web is exactly the way we always have -- accretion. With very few exceptions, websites from the early 90s still work today. We didn't remove the `table` tag when we introduced flex boxes. We add new APIs while preserving existing functionality. If you want to see a fantastic talk on this exact subject, watch "Stewardship: The Sobering Parts" by Brian Goetz [2]. It is well worth your time.
And I agree that we shouldn't break things lightly, but in this case that wasn't possible.
The options were: let website scrolling be laggy and janky on mobile, or break a very small number of websites. (And your point is that things may never be updated, so no amount of giving them the ability to opt in to these improvements would ever help.)
And I stand by my thought that they made the right decision here, despite the very few easily upgraded sites that were impacted.
> The idea that sacrificing actual functionality in exchange for smoother scrolling makes the web "better" is a claim that makes sense only in a terrifically isolated bubble.
It's a simple question of cost benefit analysis. Some users are harmed by slow scrolling, others will be harmed by the update. There's not some obvious reason to prefer one group to the other absent an assessment of how many users are impacted and how bad the impact is.
In this case, the impact of slow scrolling affects almost everyone, and it's pretty bad. For a long time, I basically wouldn't bother to use my web browser on the phone.
The impact of breaking some scrolling behavior affects the users of a small number of poorly maintained sites, and in most cases, not in a way that actually breaks anything.
To me, the "terrifically isolated bubble" approach is to focus only on the part of the user experience you care about rather than balancing the pros and cons to all users.
My best guess? Because he thought it would be cool.
I'm not saying the decision makes sense. It obviously doesn't. But small organizations like a local church just don't have the expertise or resources to make good decisions about this sort of thing.
Saying they deserve what they get is pretty unhelpful - it is completely unrealistic for them to hire someone who knows what he's doing. They can't fix that problem. So what you're basically saying is "parochial institutions without expertise or money deserve to be screwed over." Personally I'd like to live in a better world than that.
Of course, I also agree with the decision that Chrome made, so... there's that. But the fact that groups like the church got screwed over is unfortunate.
I explained this in my first comment, but I'm also basing it on my own testing in February when this came out, as well as on years of feedback from users of mobile Safari, which has done this for quite a long time, and it has greatly improved scrolling performance in that browser.
But how does this apply to the majority of users as you suggest? As a user, I'd like the website to keep working. I don't agree with simply breaking working websites via a top-down dictatorial approach.
Because in my personal case the majority of users would take smoother scrolling over a small UI bug that doesn't impact the usability of the application and is fixable in a few hours at most.
Because any websites that need this "blocking events" functionality can opt in to it quite easily.
Because as I explained in my first comment, the breakages didn't affect that many websites, and the sites it did affect were mostly just small UI bugs that didn't impact usability.
Yes, some sites were impacted and broken, but they were the extreme minority, and even if you accounted for all of the users of all of those sites, I'd be willing to bet that it doesn't come anywhere close to a "majority" of web users. Hell, I'd wager that not even 1% of web users were negatively impacted by this change in any way.
I hate you[1] when you interfere with scrolling. Yes, some people do it right and so on. You are not one of them. Please. Stop.
Yours,
A user that will close your website when scrolling is messed with.
---
[1] not OP, but the average developer, which, under time pressure and without many resources, cannot test all desktop/phone and browser combinations. Assuming they even care.
Yeah, not a web dev myself, but this seems kind of like a new <blink> tag: very rarely makes for a better user experience, but tempting for sites to use.
Unfortunately, browsers have a long history of giving sites the power to do things I probably don't want, like opening new windows, moving windows to the back, opening alerts that are implemented as modal dialogs (which block all tabs, even unrelated ones), and disabling context menus.
I guess they are trying to provide a rich set of capabilities to allow web devs to make neat things, but from my perspective as an end user, they're all about could, not about should.
If a Chrome update breaks the governmental website that relied on some "never used" feature and the user can no longer, say, apply to unemployment insurance, real people would be getting affected in very real ways.
We wanted technology to be part of people's lives; now it is, and we have to own up to the responsibility.
So many websites have "Use with Chrome" that if a user sees the website not working in Chrome, it's likely that the user will assume the website is broken, not the browser.
This page doesn’t do scroll-jacking. Its state appears to update asynchronously based on the scroll position. That is the correct way to do this sort of thing.
Scrolling on this article is buttery smooth on my Apollo Lake (1.1GHz Celeron) netbook, on both Firefox and Chrome, even if the background animations aren’t. No jank whatsoever.
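The pattern praised here, reading scroll position in a passive listener and deferring DOM work to the next frame, can be sketched as a small coalescing helper. The `schedule` parameter and the `updateUI` name are illustrative, not from the site in question; the timer default is only so the sketch runs outside a browser.

```javascript
// Coalesce bursts of scroll events into at most one update per frame.
// In a browser you'd pass requestAnimationFrame as `schedule`.
function rafThrottle(fn, schedule = (cb) => setTimeout(cb, 16)) {
  let queued = false;
  return () => {
    if (!queued) {
      queued = true;
      schedule(() => {
        queued = false;
        fn(); // read the latest scroll position inside fn, not at event time
      });
    }
  };
}

// Browser usage (names assumed): the listener never blocks the scroll.
// window.addEventListener(
//   'scroll',
//   rafThrottle(() => updateUI(window.scrollY), requestAnimationFrame),
//   { passive: true }
// );
```

Because the listener itself does no DOM work, the compositor can keep scrolling at full speed regardless of how expensive the redraw is.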
meanwhile... once I scroll to the globe, Chrome locks up.
:|
Scrolling in FF is indeed super smooth, but that background janks all over the place. Not sure what I'd prefer, tbh, though agreed that the vast majority of scrolljacking is utter garbage.
First of all I 100% wholeheartedly agree with the message of this post. However I have a minor quibble:
> Chrome broke half of user websites, the ones that were relying on touch/scroll events being cancellable
Either I badly misunderstood or the author is asserting that 50% of all websites rely on touch/scroll events being cancellable. That's inflated by at least 100x. Exaggeration is doing no favors here; the rest of the article is pretty rational despite it being clear that the author is pissed off, but this one sentence undermines it.
---
Unrelatedly:
> We really don't have more than anecdote (and our metrics) on the "support" side, and no precise way to quantify the breakage. I'd love to have a more quantifiable way to make these sorts of trade offs.
(Emphasis mine.)
Wow. So they're breaking backwards compatibility in a standardized web API with no plan for how to measure the fallout? That's not very nice.
> Either I badly misunderstood or the author is asserting that 50% of all websites rely on touch/scroll events being cancellable. That's inflated by at least 100x. Exaggeration is doing no favors here; the rest of the article is pretty rational despite it being clear that the author is pissed off, but this one sentence undermines it.
The funniest part of this sentence is that it cites no source.
Meanwhile, Google claims very clearly that they analyzed websites and determined pretty much nothing would break.
Given that the internet did not break that day, I'm going to go with "they were probably right".
Especially since, unlike this website, they have the crawl data to know.
I side with Google on this one. IMO we should be breaking JavaScript more often, especially in the name of performance, to make people use less JS and simpler JS on their websites.
Congratulations on your opinion! You are free to disable JavaScript on your own browser and be rid of it forever. Meanwhile, in the real world, JavaScript is never "getting off the web".
Maybe, but the really heinous decision is digitally altering how websites get delivered to the user to make your own product look better. Even if the newer listener form had been around for several years, it would be unacceptable to essentially try to decree a new design pattern by taking websites hostage for browser performance reasons.
Consider that, to the user, these are all seen as website problems. Breaking Javascript with consensus is great - breaking designs without blame is a slap in the face to web developers.
This is a straw man argument. Randomly breaking stuff for the sake of breaking stuff is not what this is about. This update had a clear intent and only "broke the web" for sites running code that was already risking breakage by deploying an anti-pattern, namely blocking on scroll!
I'm no believer in Google's faux-altruism, much less sympathetic to their cause (see: AMP). The rollout could've been less aggressive, sure. But I don't think equating a step towards sane default behavior with wrench-in-the-gears chaos adds much to the discussion.
It would be a straw man argument as a reply to the article, but it's a reply to a comment that advocates breaking the JS to make developers use it less.
I think that's a mischaracterization of the comment's argument, despite its perhaps simplistic language. Maybe I'm wrong but given the context the spirit of the argument seems to be "let's load less shitty JS," i.e. less code that does stuff like block on scroll. So in my view the argument is really not about the developer experience at all; it's about encouraging code with concrete benefits for the end-user. Breaking "if" semantics is just inducing chaos to no one's benefit.
Hahah, I was joking, but since you put it this way, I thought about what would happen, and it would be glorious.
But you know what's gonna happen, right? Someone will create a “jqif” library that runs the same `if` 10 times (maybe throw in a transpiler) or whatever, and then we'll have 10x slower JavaScript that fails with a 1/10^11 chance.
I think JavaScript is largely a pox on the web. I hate almost every website I come across today. I just try to ignore all of the bullshit for as long as it takes to consume the actual content, and 95% of the bullshit is possible thanks to JavaScript.
Do you think that if JS weren't available, advertisers and marketers wouldn't try everything to shake some $ out of your pocket? You would be pissed the same way.
Also, if it weren't JS, would it make a difference if the bullshit were created in Lisp, Java, or Erlang?
That's just laughable. Somebody at some point would have created an alternative browser with client side scripting. Otherwise you end up with static text and video. Wonderful. And plugins! Because you don't allow scripts :)
I don't think that it was a mistake at the time. I think that in retrospect it has been abused, and many of the things that justified it are now possible without it. It should be deprecated and removed.
Yes, I know this is a fool's errand. I don't actually expect this to happen.
A UI markup language could define forms that are "desktop-like" without involving a Turing complete language.
The paradigm of the HTML form with controls and a submit action has not been developed to its full potential, because it was basically derailed by JS.
It could have a lot more widgets, and there could be some standard way they exchange and validate data, without a full-blown client-side programming language.
For instance, say you want the user to pick a date. There could be a standard widget in HTML for doing that that could be dropped into a form, and it could be configured to communicate its value bi-directionally with some other field. (Perhaps like cells in a spreadsheet or that sort of reactive thing.) A rich set of standard widgets with flexible styling and layout, communicating the content of the data model to and from the server. Something like that.
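The spreadsheet-like linking imagined above can be made concrete with a toy sketch. Everything here (the `Cell` class, the `link` method) is invented for illustration; it is not an existing HTML feature.

```javascript
// Toy data-flow cell: widgets would declare links, and values would propagate
// without the author writing any event-handling code.
class Cell {
  constructor(value) {
    this.value = value;
    this.subs = [];
  }
  set(v) {
    this.value = v;
    for (const fn of this.subs) fn(v);
  }
  // One-way reactive link: whenever this cell changes, push the (optionally
  // transformed) value into `other`.
  link(other, transform = (x) => x) {
    this.subs.push((v) => other.set(transform(v)));
  }
}
```

A declarative date widget, say, would be one such cell feeding another field, with validation rules attached the same way, all specified in markup rather than script.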
I don't understand your argument? You seem to agree "we should be breaking JavaScript" but you're against backwards-incompatibility...which is just a way of saying "breaking JavaScript."
Except now I encounter broken sites all the time because Chrome is not my preferred browser. I think Google is in the wrong and doing everyone a disservice. This is a step backwards.
I generally use Firefox myself. What I think is hilarious is that a lot of sites are half-broken by my NoScript settings, but I never realized they had a preference for Chrome because that code doesn't even run.
There are no more websites. There are applications, delivered on application platforms, using CSS, HTML, and JS as programming languages. Chrome is the application platform, and Google is the company shipping it. Like all application platforms, the interests of the company that designs it always come first.
Let us return to a moment to the WWW's ripe vintage of 1995, when Netscape Communications Corporation released a browser which had 3/4 of the browser market share within a year of release. Their product was advertised as a consistent universal interface to the web. They created custom features for their product, such as SSL, and JavaScript. But many of the features were released to outdo its main competitor, Microsoft, who could dedicate far more resources to development (they had so much cash they didn't even need to charge for the browser!).
Netscape Communicator attempted to be a groupware solution combining multiple products to provide a complete solution for office and enterprise needs. But the design was too monolithic and complicated, and development was eventually halted. Ever since then, many organizations have attempted to build complete groupware solutions, but have been weighed down by the difficulty of developing such a complex suite of applications.
And so came the future. As the web's technologies progressed, so did the capability of content delivered through a "standard" web browser. Even though browsers have traded back and forth over who supports what, most of the time the browser with the majority market share holds all the cards. As long as that browser works the same on multiple platforms, they don't need to worry about cross-browser compatibility, because that isn't their goal. Netscape's original goal was to kill Mosaic, and they did that in spades.
Today, Google provides a mostly-complete groupware solution, delivered on its favorite application platform: its own web browser. The costs of developing and shipping the software to end users are much lower than traditional native apps, and they increasingly control more and more of the pipeline, even to the actual computer (Chromebooks are designed to cement Google's ownership of computing resources, freeing them from the constraints of other vendors' platforms).
They not only don't want you to have "less JS and simpler JS on a website", they don't want you to run a website. They want you to provide an application which runs on their application platform. In service of that goal, they have developed dozens of web technologies designed to further their own application platform, just like the Netscape of old. If they have to break compatibility with other browsers to do it, that's fine with them.
The bizarre thing is breaking existing websites on their own browser. If you can't browse the web reliably you'll stop using their browser, and then their app platform and apps are in danger of becoming obsolete.
Except that Firefox LTS and other minority browsers suffer. I already use a couple of sites that disregard compatibility with older Firefox versions and use JS that breaks things.
This has nothing to do with making event listeners passive by default. If you don’t do feature detection then of course your website will not be backwards compatible.
If you care about backward compatibility there exists a polyfill for this functionality.
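For the record, the widely circulated feature-detection trick for passive support works by handing `addEventListener` an options object whose `passive` property is a getter: engines that understand the options form read the property, while older engines treat the third argument as a boolean and never touch it. The `EventTarget` fallback below is only so the sketch runs outside a browser.

```javascript
// Detect whether this engine supports the { passive } listener option.
let supportsPassive = false;
try {
  const opts = Object.defineProperty({}, 'passive', {
    get() { supportsPassive = true; return false; },
  });
  const target = typeof window !== 'undefined' ? window : new EventTarget();
  const noop = () => {};
  target.addEventListener('probe', noop, opts);
  target.removeEventListener('probe', noop, opts);
} catch (e) {
  // Engines that choke on an options object fall through: supportsPassive stays false.
}

// Later, a listener can be registered compatibly in either world:
// elem.addEventListener('touchstart', handler,
//   supportsPassive ? { passive: true } : false);
```

This is exactly the kind of detection the article complains about having to do; the polyfill mentioned above wraps the same trick.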
In saying that "Devs assume Chrome", I'm asserting that they do not do feature detection.
However, that does not absolve Chrome of breaking backwards compatibility, workaround or not. Tossing the responsibility of dealing with their breaking changes back on the developer (with little notice) is what caused this article in the first place.
If Chrome shipped with opt-out UI option to disable janky scrolling, would it be any better?
Web is not a static target. Being afraid to break things gets you a decade of Flash websites. Things that are important will be maintained, and things that aren’t… well good riddance.
Personally, I think we are way past due for HTML6. A sort of Vulkan-like platform for the Web[1]. HTML5 should be built on top of this next-gen low-level platform. That's the way forward if the Web wants to remain competitive. Otherwise in 5 years we'll all be writing Android Instant apps.
OK, this sounds bad. But... Devil's advocacy here: are there any real-world examples of actual sites whose event behavior was actually broken by this change in Chrome 56? It happened a few months back, and I don't remember anyone complaining.
I mean... it broke the author's app. Probably a few others somewhere. But it seems not to have broken anything significant.
I guess I fail to see the concern here. It's an edge case of an existing API that apparently "no one used". Google found a way to get a benefit from exploiting a "change" in this "unused" API, presumably tested to make sure it was unused, and then went ahead and pushed the change over an 8 month period.
Is that really so awful? As someone who lived through the early '00's and IE, this seems pretty benign to my eyes.
It broke our web apps. Specifically, drag and drop list reordering functionality and image cropping. I was pissed when it happened. It wasn't a change that a typical content site would be broken by, but a lot of web apps were: drag/drop reordering isn't a very unusual feature these days.
There's a demo I wrote using jQuery UI Draggable that used to work with touch on mobile (using jquery ui touch punch) that no longer works.
I actually had no idea what could've broken it until I saw this submission. It's still not going to be fixed, though, since I don't remember the code. I assume a lot of people are in a similar position.
> As a user, I certainly do not care about “being part of moving the web forward aggressively”. Why should I? I like my stuff working, not broken.
Actually, I do want to be part of websites being faster, and I don't care about the functionality that is being broken. Performance isn't a secondary concern - if it's bad, the site is unusable from my point of view.
Your shitty scrolljacking site breaks the web, Chrome is trying to fix it.
This is a change to how Chrome deals with sites that use onScroll. In their current state, most of these are broken as far as I'm concerned. Chrome's changes will partially fix a lot of these sites, at the cost of breaking some that aren't currently broken. That is a win for the user.
My only problem with this change is that developers can still override the default and cancel scroll events. The handful of legitimate use cases this enables aren't worth the cost of letting shitty designers abuse it.
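The override being objected to looks like this: after the intervention, a handler that wants to cancel scrolling must opt out of the passive default explicitly. The sketch uses a bare `EventTarget` so it also runs outside a browser; on a real page the target would be `document`.

```javascript
// Opt out of the passive default so preventDefault() still works.
const target = typeof document !== 'undefined' ? document : new EventTarget();

target.addEventListener('touchmove', (e) => {
  e.preventDefault(); // honored only because passive is explicitly false
}, { passive: false });
```

With `{ passive: true }` (or, post-Chrome 56, with no options at the top level), the same `preventDefault()` call would be silently ignored.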
I think what it comes down to is backward compatibility for the web. There are SO MANY websites relying on old behavior that introducing new behavior is going to break some sites that don't KNOW about your shiny new shit.
IE used to have this kind of thing too, anyone here remember "Quirks Mode" and so on?
What web browsers sorely need is a backward compatibility standard that they can STICK TO. Such as feature detection as a first-class API to be tried first. Perhaps this way any new features can be detected across browsers. Like, they would actually have to coordinate to name EACH new change the same in EACH new browser.
But this is what you get when you have multiple browser makers. Some just won't coordinate with the others and you need another layer - a library - to abstract away the differences.
Which is why it would be super-useful for browsers to add content-addressable protocols (read: not just http) to fetch files from any website in a DHT. So these libraries can be loaded ONCE and become cheap to use!
Does anyone know of any mainstream browser planning to implement, I dunno, IPFS? The only one I've heard of doing this is Blockstack, and dude, how much adoption does that have exactly?
Last question -- can there be a browser extension that can intercept http requests by their Subresource Integrity checks, and load them on demand as if they were content addressed? Maybe use service workers in Chrome? Is that possible? I would be willing to partner with someone here to write such a thing.
> Does anyone know of any mainstream browser planning to implement, I dunno, IPFS?
It's actually not too far off - IPFS is already capable of running a full js-ipfs node in a WebExtension's background page and exposing its API to content pages. What's missing are minor things like streaming to/from the background page, and WebRTC in the background page.
For proper ipfs:// protocol support in the address bar and in links, a better protocol handlers API in WebExtensions is required though, which will take some more time. Basically right now with the ipfs-companion extension, ipfs:// URLs get rewritten to http://.
No, Google were concerned about your websites. Your mobile websites which are so heavily overloaded with JS that basic interactions like scrolling don’t work.
Complaining that Google “broke the web”, when mobile developers have been making it slowly unusable—and unused—for years is pretty hypocritical. All the feature detection and backwards compatible changes in the world won’t help developers when their entire userbase has fled to walled gardens like Facebook. But I guess some people will resent anything that forces them to accept short term pain, even if it’s essential to their long term survival.
So this is a good thing because the gatekeeper is Google instead of Facebook?
How's about adhering to standards? We gave Microsoft a hell of a time for not adhering to standards, but Google gets a free pass now? Because "performance"? (read: some negligible gains on some synthetic benchmarks)
Then let's stop pretending: let's scrap the W3C and go back to the good old days of "Best viewed on Netscape Navigator at 800x600".
Let's also complain about Mozilla because they are removing Flash support and thus breaking backwards compatibility. It's obvious that they are all 'evil'.
It's here, and it's basically the brutalism of the web; see Craigslist or the Drudge Report, or just look at Hacker News. These are not "web 2.0" designs, but they are easy to use and well organized.
Except that Hacker News website is just dumb, lacks a ton of functionality and gets very basic design wrong. Have you looked at that upvote button? Is this a website for ants?
Can't they just add another argument for the options map without breaking backwards compatibility?
That's ugly as hell too, but in a way that preserves backwards compatibility.
My feeling is that so many compromises have been made to maintain compatibility in JS/DOM-land that it seems capricious to make this kind of decision now.
> Can't they just add another argument for the options map without breaking backwards compatibility?
The options map is a recent addition in the first place, and they want passive listeners by default. They originally made passive opt-in, then decided that they wanted it set.
It really feels like the Chrome developers have forgotten that they're providing a platform and not an in-house Google service.
It happens very often - developers with experience building apps don't always manage to build tools for other developers very well. They focus too much on the end-user and disregard their platform developers too much.
The balance needs to be somewhere but I doubt they have it in the right place currently. For example, how do Google's own apps disable autocomplete if autocomplete is ignored?
I agree with you on 'lean'. But say I'm building a web-app that accesses medical records and I don't want sensitive fields being auto-completed by a browser.
Forcing autocomplete in this instance does a dis-service to both users and developers.
Why not? The user knows that the browser remembers field inputs; he wouldn't expect it to be different for your website. So if he really wanted the browser not to remember anything he typed in, he would use private mode or clear his recent history afterwards.
Because then there's the teacher who has to put in medical data for a whole class where everyone got the flu. The teacher is happy the browser does its job like always and that you couldn't dictate what is best for him.
"In summary, HIPAA does not specifically limit the use of drop down menus or auto-complete, but if patient information was exposed to the public through these features your business would have failed to control access to the protected health information."
So now there's a risk that if a user of a medical information system uses a shared computer (say one at home but that friends and family occasionally use) you can't be sure to have protected control. It's all very well saying 'users should use private browsing mode' - you try enforcing that consistently when you have hundreds of users.
Ultimately, you end up having to restrict access which is why I felt it does a dis-service to users.
You know very well that for your own computer it's very unlikely that it has a key logger. But not for public computers, so it's a good thing that users are aware of that.
> "Browser vendors have their own agenda. It mostly includes making their browsers look fast, sometimes at the cost of your websites become broken."
I feel like you'd know the tree by its fruit. I have a little more faith that vendors like Mozilla wouldn't pull a stunt like this, and might have been more receptive to community feedback. Not that anyone here needs a continued lecture about Google.
There’s a good point here but the clickbait trappings are holding it back, especially since passive listeners aren’t some obscure edge case which only Google needs.
My point was simply that while there's a good technical discussion to be had here, the doom and gloom styling hurts it. There is a reasonable point about backwards compatibility, the conflicts between Google owning Chrome and also making web properties which compete with other companies, etc. but there's a lot of hyperbole like this:
“Now, this is a terrible thing to do. It’s very, very, very bad. Basically, Chrome broke half of user websites, the ones that were relying on touch/scroll events being cancellable, at the benefit of winning some performance for websites that were not yet aware of this optional optimization.”
None of those claims are supported. If Mobile Chrome broke half of the sites on the web, we'd have heard a lot more outrage over the last six months, and the very strong language fails to consider all of the broken code which was making the web experience worse for almost everyone.
Again, I'm not saying that that the technical discussion isn't useful but that “breaks the web” seems unnecessarily hyperbolic. The fact that the React team is struggling with a simple JS/CSS change seems to say a lot more about the support cost of building huge JS frameworks which duplicate core browser functionality than whether the Chrome team should make decisions to help mobile performance.
I've seen similar posts when Apple announced that there wouldn't be Flash on mobile Safari. And I agree with the Chrome team's decision to force passive on document-level listeners.
Also, note that not many people complained about "blocking video/audio autoplay on mobile browsers", because it's good for users.
BUT:
- Passive event listener detection is horrible and it baffles me that they start thinking about proper way only now.
- The announcement of this breaking change was quite quiet. Chrome has so many influencers on social media, but almost no one shared/explained this change properly.
Without commenting on the "intervention" itself: why change the signature of addEventListener? It would have been trivial to add a new addEventListenerEx API with a redefined final parameter, and this alternative approach would have made feature detection trivial.
It seems Firefox supports a non-standard fourth parameter, so that won't work either.
addEventListener is possibly the worst organically grown API I've seen in a long time. If anything, its sordid history should be a lesson to the Chrome dev team that fucking around with proprietary extensions (which this most definitely is) just leads to future technical debt and pain for everyone. Why do they never learn?
That isn't an issue if you use a new method name -- I believe "addEventListenerEx" was meant to be read literally. (Appending "Ex" to a function name for an expanded parameter list is a Win32 convention.)
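To make the complaint concrete: the third argument of `addEventListener` has two historical shapes, a legacy `useCapture` boolean and the newer options object, which is exactly why cleanly detecting `passive` support needed the getter trick rather than a simple probe. Sketched against a bare `EventTarget` so it runs outside a browser:

```javascript
const el = new EventTarget(); // stand-in for a DOM node
let fired = 0;
const handler = () => { fired += 1; };

el.addEventListener('ping', handler, true);             // legacy: useCapture boolean
el.addEventListener('ping', handler, { capture: false,  // modern: options object
                                       once: true });

// Same callback, different capture flags: the spec treats these as two
// distinct registrations, so a single dispatch fires both.
el.dispatchEvent(new Event('ping'));
```

An `addEventListenerEx`-style method with a single, unambiguous signature would have sidestepped all of this, at the cost of yet another name in the API surface.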
I understand the push from browser vendors to (as they see it) encourage web developers to keep their sites up to date, but it's crazy how little Google and others seem to care about backwards compatibility.
So many websites from 10 or 20 years ago are unusable now -- if they're even still available. That seems to me a bit of a tragedy. It's sad to just write off that entire slice of human history.
Funnily enough, it's the sites that were quick to adopt the hot new HTML and CSS features from 10 years ago, but then were unable to keep updating indefinitely, that were the worst hit. Really old sites using basic HTML and CSS mostly work OK.
<Raises hand> Took about a developer day to fix everything. Drag/drop list reordering, pan/zoom image cropper were the primary things busted. IMO, the change was an unacceptably aggressive move by the Chrome team made with good intentions.
A great example of why I think the entire core ecosystem of the web is backwards.
I've had a number of comments here on HN related to this. Imagine this same scenario, except instead of the User being in charge of deciding which engine is used to render a website / webapp, the Developer is making that choice. What would a developer need to do if they were in control of which "engine" rendered their website on the user's computer?
Instead of being at the mercy of Google / Chrome, the developer of said site could simply change their HTTP Header "X-BrowserEngine" or something like this, and the client's computer would know how to (a) download the new engine if it's not on the computer already (b) sandbox the new engine (c) run the site / app in said engine.
I've called this idea the "Meta Browser" in the past. It's a concept for an app that sandboxes and runs sites on different browser engines seamlessly. The user experience is more or less as though they're continuing to use a single app to browse the web, but behind the scenes could be any number of custom engines rendering the content.
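The dispatch step of that "Meta Browser" idea could look something like the sketch below. The `X-BrowserEngine` header, the engine identifiers, and the function itself are all made up here; this only illustrates the routing decision described above.

```javascript
// Hypothetical Meta Browser shell: pick a sandboxed rendering engine based on
// a response header the developer controls, falling back to the default engine.
function pickEngine(headers, installedEngines) {
  const requested = headers['x-browserengine']; // e.g. "gecko@57" (invented id)
  if (requested && installedEngines.has(requested)) {
    return requested; // run the site in the engine its developer chose
  }
  return 'default';   // unknown or missing: use the shell's built-in engine
}
```

The hard parts, downloading engines on demand and sandboxing them, are exactly what this sketch leaves out, and they are where the idea gets expensive.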
2. Because you're not going to update your detection every time a new version comes out, meaning that new browsers that support the features you need are still going to get served the wrong version of your site.
3. Because there are more browsers than you can count.
4. Because for non-syntax related features, feature detection on the client is usually pretty simple.
For 1, 2, and 3: I figure you just need to parse the UA string to get the actual browser and version number, then have a variant for before and after a given threshold number. You're only doing this for the number of variants of code you want to run/maintain, so this isn't some sort of combinatorial explosion.
It's just another method of dispatch instead of runtime checks inside JS. At worst you put the runtime check in the "I don't know what you are" variant and let them run slower.
For 5: If the browser's lying, you're morally off the hook. The user's configured their browser to lie, and gets what they deserve.
For point 4, fair enough. Go for that when you can.
Browser sniffing falls in the same category as time, crypto, and string encoding for me. If you think it's easy, you haven't spent enough time thinking about it.
Are you referring to user agents? Serving different code based on user agent is considered bad practice, and would be a maintenance nightmare. Not to mention, serving different JavaScript would make it more difficult to reason about what is running client-side. Plus, user agents are neither canonical nor necessarily even reliable sources of information about the browser.
Because it's bad behaviour to rely on browser user agents instead of feature detection. It's what led to IE-only sites and is currently leading to Chrome-only sites.
In addition to the UA spoofing others mention, there's also forward compatibility.
If your app holds a mini "caniuse.com" UA-feature mapping, you have to keep it up to date infinitely forward in time, or new browsers won't work (think Brave/etc)
Better to detect features directly, and reward anyone who implements them, rather than just hardcode the present incumbents.
The browser only identifies itself by name, not by the features it supports. Having the server blindly decide what features the browser can handle would be terrible.
Having browsers identify themselves by the features they support would have been nice, but also very complicated, and it's not where we are now.
This doesn't surprise me. Having spent years working on a very complex web app that's built to run primarily on Chrome (-2 versions), our API/DOM usage coverage was so great that we would often joke that our unit and integration tests were effectively a 90%+ coverage of all of Chrome's functionality, so any bugs or changes they made, we knew about immediately.
I think nearly every version introduced some change that broke behavior and had to be worked around, usually fairly minor and arguably reasonable. Sometimes, though, something would be completely broken; we'd file bugs against Chrome to get them fixed, and our codebase is now riddled with comments about workarounds citing Chrome bugs, some years old.
There was even a recent change to contenteditable that was a breaking change and is not spec compliant, which totally breaks HTML-based rich-text editing systems built on it that don't want a ton of style tags and/or empty tags everywhere in their markup. This API is probably one of the worst I've ever seen: it needs to be extensible and modifiable, and while you can configure it somewhat, you're left taking the output and massaging it yourself if you want anything resembling a good product based on it. So I'd be in full support of a rewrite of the spec and a new version of contenteditable, but as terrible as this one is, it should remain spec compliant.
At one point I held Chrome in reverence for pushing the boundary and improving the QoL of web application developers, but maintaining anything of decent complexity has made me regret that, given how much extra work Google makes for us. They really need to cut out the breaking changes and do better regression testing. If our app detects errors in beta/canary, and we report them, and they STILL make it to stable, I just don't know what to say. I'm not even sure I agree that a behavior should be "breakable" just because only 0.1% of websites rely on it. With all this talk about progressive web apps, where are the progressive web browsers?
On one hand, I really feel Chrome is more on my side than the average webmaster's. I don't want bloated websites, I don't want autoplay videos, I don't want copy/paste blocked, I don't want text copying hijacked so that a pasted phrase comes with a referral link back to the article, etc. If Chrome breaks website functionality to override these things, good.
The downside is that Chrome is a very big player, and when it works outside the system, the rest of the web loses out on the spillover benefits.
This is hard: committees are slow, bureaucratic, and massively resistant to change, so saying "fuck it, I'll fix it properly for myself" is a lot easier, and I have little patience for the committee route myself. In the long run, though, going through the system is probably better.
The highlighted comments from the Google agent are very jerkish, but to be honest,
> But in Chrome we’re fundamentally unwilling to allow the mobile web to continue to die from performance bankruptcy. Other browsers are less aggressive, and people who prefer to be more conservative (preferring maximal compatibility over being part of moving the web forward aggressively) should prefer to use a more conservative browser.
is very true.
The mobile web is dying. Native apps are obscuring it while it is made irrelevant by very poor website performance.
> The gist of it: if you mark onscroll/ontouch event listener as passive, Mobile Google can scroll your page faster
onscroll has never been cancelable, and is fired after the actual scrolling takes place. So I don't see how it being passive by default changes anything. Is this an oversight in the article (and a bunch of comments here), or am I missing something?
Reading the comments here makes me feel Chrome is like Uber. They may be right that the standards are outdated and need forced change. Yet they break all the rules and regulations, and shrug it off by saying the rules have to change.
Again, this is not a comment to choose sides, just an observation. As with Uber, I can't say if I'm in favor or against. By the rules, they are wrong. In time, it may turn out they led change. Funny how you get applauded for breaking the rules early as long as you were right in the end, but criticized otherwise.
Reading the comments here makes me feel like Google's PR team made its round.
The performance benefit here has to be minuscule, the number of unmaintained webpages irrevocably broken by this has to be huge, and the time period from introducing the feature to breaking the pages that hadn't yet adopted it was simply far too short.
Even if you agree this had to be enforced somehow, the rushed execution was by any measure irresponsible.
Despite that, it seems like 8 out of 10 highly upvoted comments here make no mention of this maybe not having been ideally executed, or of it maybe not necessarily being advantageous to users either.
> Which means you can’t practically use the new form without feature detection.
That does not follow.
`{ capture: true }` works in both, since it’s an object, and thus truthy.
There’s no need for feature detection in the case you describe.
Sadly, that’s the whole premise of this article.
It’s only a problem if you want `capture: false` combined with other options, since you’d need to pass in an object, which would be truthy in old implementations expecting a boolean. But then again, the additional options wouldn’t be supported in those old implementations either.
I’m confused — what’s the actual use case that’s breaking here?
In older browsers, `useCapture` is NOT an optional parameter, and in most cases you do NOT want event capture instead of event bubbling. There's no trivial way to support `addEventListener(event, handler, { passive: true })` on both new and old browsers without the really ugly feature-detection code in the article.
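The detection code being referred to is the well-known getter trick: pass an options object whose `passive` property has a getter, and check whether the engine reads it. A sketch (a bare `EventTarget` stands in for `window` so it also runs outside a browser; in a page you'd attach to `window` or a DOM node):

```javascript
// Detect options-object support: engines that understand the options
// form will read the `passive` key, firing this getter.
let supportsPassive = false;
try {
  const opts = Object.defineProperty({}, "passive", {
    get() { supportsPassive = true; return false; },
  });
  const target = new EventTarget(); // stand-in for `window` in this sketch
  target.addEventListener("probe", () => {}, opts);
  target.removeEventListener("probe", () => {}, opts);
} catch (e) {
  // Very old engines may throw on a non-boolean third argument.
}

// Then register real listeners with whichever form is supported:
const listenerOpts = supportsPassive ? { passive: true } : false;
```

On engines without options support, `listenerOpts` degrades to `false`, i.e. the old `useCapture = false` default.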
Many tech people learned this in the context of Microsoft in the '90s, but it should be kept in mind universally:
"Embrace. Extend. Extinguish."
Maybe stop placing primacy on the "good/evil" aspects of this. Seems in part to work at a more fundamental level of human and organizational behavior.
P.S. I guess this could also apply to the people who hang ever-more JS off of their HTML skeleton, until our phones have to "boil an ocean" to load a page.
I guess that fits in with the "universal" aspect I described.
> they made all top-level event listeners passive by default.
> Now, this is a terrible thing to do. It’s very, very, very bad.
I disagree. This is similar to popup-blocking, yeah the "API" is there, so lets allow every site to just open windows because they want to?
Executing code during scrolling is a hostile action performed by web developers against users. These listeners shouldn't exist in the first place. Chrome developers sided with users, thank you Chrome developers.
They introduced a backwards incompatible change that causes a lot of sites to lose some functionality, behave differently or outright break.
Now you could say that the user (the developer, here) used the feature wrong (i.e. they caused scroll jank), but that's a bit disrespectful: sure, there were a lot of developers who had no idea what it would do to performance, but others weighed the options and decided that, even with the jank, the user experience for the majority of users was acceptable.
This sort of thinking is not "moving the web forward." It's the same thinking that created the IE6 problem — favoring browser-specific features over web standards.
Let's be clear here, this isn't a change in favor of users. This is a change in favor of Google, to speed up their browser, and in favor of their advertisers, for better overlay control.
Google is toxic for the future of the internet, but its services are convenient so from time to time a few of us will complain when they inconvenience us, like in this case, but will continue using Google products and completely forget about our grievance with this company.
Case in point: I don't hear anyone complaining about AMP anymore, or about the various search-result controversies. Sadly, like those cases, this story too will blow over and nobody will care about it in less than a week.
Where have you been? The web's been broken for a while now... it's all just duct tape and spit. We're also repeating the sins of the IE days all over again with Chrome. At this point I'm using alternative browsers, and I only fire up Chrome if I have to.
Why the heck would they take out the checkbox on the JavaScript alert box to stop it repeating? Since they did, the only way out of an annoying website that pops JavaScript alerts in a never-ending loop is Ctrl+Alt+Del.
Jesus Christ, to think there are people in this thread defending what the assholes in the Chrome team did is unbelievable. You make breaking changes opt-in, that is API design 101.
Linus should take over Chrome development.
You can make it opt-in without breaking other people's stuff. If a website doesn't opt in, its users will migrate to another one that offers the same content with a better user experience. It just won't happen tomorrow (<- common sense, of the real kind).
What really bothers me is that people like you can't see that there's a trend behind this, and it's not a good one.
> If a website doesn't opt in their users will migrate to another one that offers the same content with a better user experience
Can't you see how this logic also applies in Chrome's case: everybody would switch to another browser that does passive listeners by default because it's a better user experience.
What's bothering me is that people like you think you're entitled not to maintain your active web apps on browsers you didn't even help develop. It's not about you; it's about the users.
There is a rule against breaking APIs, there is no rule against making a website that is better than another one.
So by your logic it's ok for you to kill and rob a rich person and redistribute all the money because, hey, at the end of the day it's a better user experience for everyone else and if you don't do it someone else might.
EDIT: By the way, browsers are in the business of providing a platform. Platforms should be stable. If they plan on not doing that they should say so. Guess how many developers will stop supporting chrome the day after that?
What rule? Is it the 11th commandment? How about my rule: users are more important than API backwards compatibility (because, in the end, we are serving users, not developers).
It's simple really: killing a person to enhance user experience is morally wrong.
Forcing, in very rare cases (by my own experience), some developers to make a 1 line change so that in effect almost all pages will have jitter-free scrolling is a worthy exchange.
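For context, the one-line change being described is just making passivity explicit in the third argument. A sketch (an `EventTarget` stands in for a DOM node here; the handler is a placeholder):

```javascript
const el = new EventTarget(); // stand-in for a DOM element in this sketch
const onTouch = (e) => {
  // e.preventDefault() only has an effect when the listener is non-passive
};

// Code that relied on the old default broke when Chrome made top-level
// touch/wheel listeners passive; being explicit restores the old behavior:
el.addEventListener("touchstart", onTouch, { passive: false });

// Handlers that never cancel the event can opt in to the fast path:
el.addEventListener("touchmove", onTouch, { passive: true });
```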
> By the way, browsers are in the business of providing a platform
YES!!! That's the main point of it. They serve (and thus get paid by) users first, developers second.
> Platforms should be stable. If they plan on not doing that they should say so. Guess how many developers will stop supporting chrome the day after that.
They did more than say it, they just did it. So let me guess how many web developers will stop developing for Chrome now? Hmm... zero?
If they plan on making breaking API changes whenever they feel their data suggests it's OK, they should clearly say so. But openly, right on the front of the Chrome home page: "We care about you, the user, and will break the platform for developers whenever we think it's necessary to improve your experience".
> It's simple really: killing a person to enhance user experience is morally wrong
Well for me it's morally wrong to break APIs, if you don't agree I hope to never touch anything you ever make.
This is not about caring about the users; it's about doing it without pissing on developers' heads.
By the way, I was not even affected by this, but I know that if we let this slide it will just get worse and worse. As if the web was a nice platform to develop on to start with...
This is my very problem with Chrome/Chromium right now. The Chrome team makes assumptions about how things "should" be (in a highly subjective way) and breaks the web.
Another example: they decided to ignore the value of `autocomplete` attributes on `<form>` tags [1], because:
> The tricky part here is that somewhere along the journey of the web autocomplete=off become a default for many form fields, without any real thought being given as to whether or not that was good for users. This doesn't mean there aren't very valid cases where you don't want the browser autofilling data (e.g. on CRM systems), but by and large, we see those as the minority cases. And as a result, we started ignoring autocomplete=off for Chrome Autofill data.
Problem: Chrome now auto-fills the wrong parts of forms with usernames/passwords, and this breaks forms that receive unexpected data on submission. And now they've opened an issue on their tracker [2] to track "Valid use cases for autocomplete=off".
It's insane to think that the developer is wrong for using certain attribute values, and to assume how a page should behave while ignoring developers' intentions and Web standards.
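For reference, this is roughly the markup pattern at issue, along with the per-field workaround Chrome steers developers toward (the field names here are illustrative):

```html
<!-- Chrome ignores this form-level opt-out and may autofill anyway: -->
<form autocomplete="off">
  <input type="text" name="crm_contact_name">
  <input type="password" name="one_time_code">
</form>

<!-- The escape hatch Chrome does honor: specific per-field hints, e.g. -->
<input type="password" name="one_time_code" autocomplete="new-password">
```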
[1] https://bugs.chromium.org/p/chromium/issues/detail?id=468153...
[2] https://bugs.chromium.org/p/chromium/issues/detail?id=587466