Please consider not adopting Google WebComponents (palemoon.org)
440 points by XzetaU8 on March 17, 2020 | 251 comments


Some context that may make the reasoning behind this clearer.

Pale Moon is a fork of Firefox. The purpose of the fork was to retain support for various legacy technologies that mainline Firefox was abandoning, such as NPAPI plugins, XUL/XPCOM, and old-style themes.

If keeping up support for those old technologies and keeping up with the latest improvements in Firefox sounds like a lot of work for a small team to tackle, that's because it is. As a result, Pale Moon doesn't include all the performance improvements that shipped with Firefox Quantum, along with various other "modern Firefox" improvements like not running all tabs in a single process. (Pale Moon last rebased itself on mainline Firefox circa Firefox 52, which shipped three years ago.)

My suspicion is that, when Pale Moon asks you not to use Web Components, they're doing so because their engine is old, it doesn't have the necessary muscle to deal with things like shadow DOM in a performant fashion, and they don't have the resources necessary to give it that muscle. They could rebase onto a newer version of Firefox that did have that muscle, but they'd lose support for all the legacy technologies they forked away to keep in the first place; and re-implementing all those APIs on their own would be just as prohibitively large a project as tuning up their engine on their own would be. So, in the absence of some huge population of skilled developers volunteering to pitch in for free, their project is kind of stuck between a rock and a hard place.


> My suspicion is that, when Pale Moon asks you not to use Web Components, they're doing so because their engine is old

They address this:

> Full disclosure: While it is absolutely true that it is in our direct interest that web developers don't use something we are still working on implementing ourselves (considering our limited capacity as a smaller non-profit community, which is likely the same for any other true Open Source/Libre projects out there), the impact of what is outlined above is much more far-reaching than just our own projects; not only is it pushing for a proprietary web that is vendor-locked, it also, as stated, becomes something that will be impossible to archive, save or process.

So, yeah, they have a special interest, but that doesn't take away from their point.


It takes away from the point when it's untrue. Web Components is a standard. Multiple implementations exist. They don't have one and are sore.


This isn't the first time the topic of browser monoculture has been brought up. Why does it matter that it was Pale Moon that brought it up this time? If I or anyone else had written the post instead, would people then focus on the issue instead of on who brought it up?

> It takes away from the point when it's untrue

What exactly are you saying is untrue? Can you quote the part?

There are comments[1] right in this same thread that confirm Pale Moon's statements about the inability to save or archive pages.

> Web Components is a standard. Multiple implementations exist.

Are you sure it's not Google that's controlling what's to become "standard" and Firefox that chases after them? And I say Firefox specifically, because, besides Safari, they seem to be the only other implementation on the market. Edge uses Chrome's engine. Indeed, with Chrome's 64% in market share[2], anything they do becomes "standard" automatically. With Firefox's 4.5%, I don't think they can do anything but chase after the Chrome guys to wherever they decide to take the web.

Isn't the point of a standard to help foster a rich ecosystem of diverse, interoperable implementations? I feel that a standard that evolves too fast (as made possible by an implementation monoculture) fails at fostering one.

This is what I believe Pale Moon is getting at when they say:

> The more additional "features" are tacked on to these components, the less likely it is for non-Google clients to be able to display sites in full or properly.

It gets harder and harder to make a new web engine from scratch. That's a step backwards for achieving a more diverse browser ecosystem.

The fact that Web Components specifically makes it harder to parse content (for archival or indexing purposes, etc.) is a separate issue, and for that issue it doesn't matter whether it's a standard. In fact, being a standard makes it worse.

[1] https://news.ycombinator.com/item?id=22605069

[2] https://gs.statcounter.com/browser-market-share


Some of my browser extensions depend on querying the DOM, and I'm also worried about not being able to fully inspect web pages that use closed shadow root elements.


WebExtensions in Firefox get access to open and closed shadow roots: https://developer.mozilla.org/en-US/docs/Web/API/Element/ope.... I think there are open bugs for some further improvements.
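
For reference, a rough sketch of using that from a Firefox content script (the catch-all querySelectorAll loop is just for illustration; openOrClosedShadowRoot itself is the documented extension-only property):

    // Firefox content script: openOrClosedShadowRoot is exposed to
    // WebExtensions only, and returns the shadow root even when the
    // page attached it with { mode: "closed" }.
    for (const host of document.querySelectorAll("*")) {
      const root = host.openOrClosedShadowRoot;
      if (root) {
        // Inspect or query inside the shadow tree as usual.
        console.log(host.tagName, root.mode, root.children.length);
      }
    }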


How does Pale Moon maintain security for these legacy technologies?


I'm not sure they could, even if they wanted to. The security problems with those technologies aren't due to implementation goofs. They're fundamental issues with the design of those platforms, all of which were designed back in the '90s, when we didn't appreciate just how dangerous an environment the Internet could become.

There's no way to fix issues that run that deep without breaking backwards compatibility with all the existing code written for those platforms, which would negate the entire point of the exercise.


> all of which were designed back in the '90s, when we didn't appreciate just how dangerous an environment the Internet could become.

No, it was in the 2000s and 2010s. And even more than 10 years ago, people were aware of how dangerous the Internet could be, as shown by Internet Explorer.

Moreover, Firefox had sandboxing before Chrom* and Electrolysis:

1. https://developer.mozilla.org/en-US/docs/Archive/Add-ons/Sec...

2. https://developer.mozilla.org/en-US/docs/Archive/Add-ons/Dis...

3. https://developer.mozilla.org/en-US/docs/Archive/Add-ons/Int...


The "sandboxing" you're referring to was very limited in scope. It essentially amounted to not giving content scripts any references to non-content objects like the browser chrome; poorly written extensions often broke this "sandbox", and it provided no protection whatsoever against browser exploits -- any arbitrary code execution exploit would allow an attacker to run unrestricted code on the user's system.



How do pigs fly?


They also maintain an in-between version: https://www.basilisk-browser.org/


If the core problem is that you don't have enough resources to do everything you set out to do, splitting those resources between two separate products doesn't help matters.


Exactly my thought... How exactly is fragmentation going to help here?


Well, if I as a developer realized that I was putting my users at risk and was unable to do anything about it, then the only logical and morally correct option would be to tell my users and cease operation.

Everything else is selfish and delusional!


I tend to agree, but if the Pale Moon developers agreed they would have shut the project down long ago. The legacy APIs they're clinging to are loaded with security risks -- that's part of why Mozilla ditched them in the first place. And there's not much evidence that Pale Moon et al. are doing anything substantial to remove or mitigate them.


> and cease operation.

I hate when that happens. I choose the applications in my workflow for specific reasons or for some unique value that they offer. It's one thing when a developer moves on from a project and leaves it in an unfinished state. It's an entirely different story when the developer basically throws up their hands and tells their users they give up because xxx is too hard. Just tell me what's wrong and let me decide! Then if your userbase crashes you can decide whether you want to continue or not. Deciding for yourself what's important to me is extremely frustrating as a user. Kinda like when Microsoft deprecated the SmtpClient class in .NET and told people to use MailKit instead.

So now it's taboo to use the perfectly good Microsoft class which only takes 6 LOC to send an email, and everyone is crapping their pants trying to make 300 LOC MailKit scripts to perform the same task. Because some clown at Microsoft got a hard-on over MailKit and decided to burn their own house down.


.NET Core? We are still mostly in .NET Framework land and happily using SmtpClient.


Yes sounds like they should just cut their losses. They gave it a try and discovered you can't maintain a fork of a browser that easily.


[flagged]


I reckon Lord_Lestat is about 12 years old.


[flagged]


The political propaganda in your post, e.g. "radical left", "leftist fanaticism," makes it lose all meaning. Sadly I didn't see a flag button. I would hope that posts like yours are not welcome in a technical discussion.


There is a difference between discussing something and the fascist haters of customization these days.

It is a fact that most people are either using Firefox OR Chrome and sharing such a toxic opinion. And that is not something the right-wing side is spreading; this opinion you can find mostly among the left-wing audience.

Therefore Lestat is right and this has to be mentioned, as there is zero reason that such disgusting fascism should just be ignored.


Would you please stop?


Say a farmer notices they accidentally dropped their rat poison into the milk. Should they still sell it?

Assume it comes with the warning "this will kill you", printed large enough to inform exactly the same fraction of customers as PaleMoon users who have the competence and information to make an informed judgement regarding its security.

Also: this has absolutely nothing to do with "leftism" and "dictatorship", and "conformity". Except in the sense that those terms have apparently become synonyms for "not doing stupid things that will harm you" to people prone to use all-caps.


If you don't have the resources or expertise to provide those features in a safe fashion, you're not doing the user any favors by going ahead and providing them anyway.


It doesn't make sense to me to ask developers not to use a newer technology that can make their lives easier and improve the bottom line for businesses, just so we can accommodate a single project with a likely very small set of users. I'm surprised they would even think that is an OK thing to ask.


Reminds me of the invocations of old to ensure that websites support disabling Javascript, and/or text-only browsers like Lynx. While there are those who still (understandably) want these things, I think we all realize it's largely a lost cause at this point.


Google, Microsoft, Apple and even Firefox have an incentive to reduce competition by making the web as complex as possible. If the web is to be free, it must be radically simplified. To call that a lost cause is just a self-fulfilling prophecy.


I don't disagree (although I'd be curious to hear your rationale on Mozilla; their incentives are... curious). I suppose what's hard to not view as a lost cause is the idea that accessibility to non-JS/Lynx/etc is best-practice or the default. But there are counter-examples of less-is-more good citizens (Craigslist and HN itself come to mind).

As flawed as browsers (and their vendors) may be, I come at it from the other direction: from the perspective of end-user adoption, the open web and federated email are our last two bastions of open computing ecosystems of any kind, holding back the tide of walled-garden apps and clouds and such from taking over completely. While radically-simple "good citizens" of the open web may not be the norm, at least they're possible, which can't exactly be said for the computer-as-appliance model.


Anyone can write an open letter. Whether someone acts on it is another matter.


Another way of looking at it is that Google is the new Microsoft, playing the embrace, extend, and extinguish game.

You are attacking the post ad hominem. But your comment does not change the validity of the original argument.

I'm interested in knowing your opinion about the arguments, though.


> My suspicion is that, when Pale Moon asks you not to use Web Components, they're doing so because their engine is old

Your suspicion is entirely wrong.

1: https://forum.palemoon.org/viewtopic.php?t=22270

2: https://forum.palemoon.org/viewtopic.php?t=22399


These are both basically MoonChild saying "it's not true" which doesn't change whether it's true.

MoonChild also told everybody that Let's Encrypt is a terrible idea, untrustworthy and nobody should use it. How did that end up?

I will give them props for actually doing it (plenty of people went "I'll just fork it" when Mozilla decided to change things but very few of them actually did all the resulting heavy lifting) not just talking about it. That's not nothing, but it doesn't make them right.


Neither of those really even contradict what smacktoward said, much less disprove it. In those posts, Moonchild disagrees with the idea that Pale Moon is "a rebranded rebuild of an old Firefox version" and more broadly that it corresponds to any particular Firefox version since there has been years of parallel development. He also affirms that Pale Moon has kept up with other browsers in security. It is entirely possible for all of that to be true and for smacktoward's suspicion — that Moonchild's primary problem with Web Components is that they work poorly in Pale Moon and it would be a lot of trouble to fix that — to be true.


> It is entirely possible for all of that to be true and for smacktoward's suspicion — that Moonchild's primary problem with Web Components is that they work poorly in Pale Moon and it would be a lot of trouble to fix that — to be true.

It is not true, for the simple fact that the next milestone release (v29) of Pale Moon will support Web Components.

> Neither of those really even contradict what smacktoward said, much less disprove it.

smacktoward repeats the same tired oversimplification that PM just "rebased itself on mainline Firefox circa Firefox 52"; both of my links prove that this statement is fundamentally wrong.


> Your suspicion is entirely wrong.

In what way? Could someone summarize these sources?


What a weird polemic. "With Google WebComponents here we mean the use of CustomElements and Shadow DOM, especially when used in combination, and in dynamically created document structures (e.g. using module loading/unloading and/or slotted elements). ... WebComponents used "in full" (i.e. dynamically) inherently creates complex web page structures that cannot be saved, archived or even displayed outside of the designated targeted browsers (primarily Google Chrome)."

It's a web API. Browsers implement it. Including Safari and FF. If the argument is that it's hard to implement and makes it harder for new browser implementations, the same could be said of new CSS features, other APIs like Web Audio, etc. Trying to brand Web Components as Google-only may have worked in the v0 prototype days in like 2014, but it's factually not the case today.
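
For readers who haven't touched the API, the "Google WebComponents" being argued about boils down to Custom Elements plus Shadow DOM used together, roughly like this toy sketch (the tag name and markup are made up):

    // A custom element with a shadow root -- the combination the
    // Pale Moon post objects to. Usage: <user-card>Alice</user-card>
    class UserCard extends HTMLElement {
      constructor() {
        super();
        // "closed" mode is what makes the tree unreachable from page
        // scripts (and harder for archivers); "open" is shown here.
        const shadow = this.attachShadow({ mode: "open" });
        shadow.innerHTML = `
          <style>p { font-weight: bold; }</style>
          <p><slot></slot></p>`;
      }
    }
    customElements.define("user-card", UserCard);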


It's a web API, but it's a web API that requires development work to do, and Goanna (Pale Moon's pre-multiprocess, pre-all-the-things-that-make-modern-Firefox-good Gecko fork) would need to implement those APIs.


Just ran checksec on Pale Moon's Linux binary distribution - their binaries are built with no mitigations whatsoever. ASLR disabled, no stack canaries, no fortify.

This whole project appears to be a security disaster and nobody should use it.


They're generally pretty unprofessional.

https://github.com/jasperla/openbsd-wip/issues/86


I feel for the OpenBSD developer who apparently expected the project lead to be a reasonable grown-up, but ended up slapped in the face.


That exchange borders on unbelievable for me. How any project could be so tone deaf is beyond comprehension. And the initial message from the PM project is more than enough to set anyone off.


> This issue is now officially resolved. There will be no Pale Moon browser, official or not. The port has been removed. Farewell, petulant children.

Beautiful response.


Petulance is independent of age.


This read was amazing, and I would fully recommend it for someone self-isolating with a hot cup of coffee who wants a jolly read about how not to communicate with others.


Then there's this one from privacytools.io where they disregard criticism as "Fake News" https://github.com/privacytoolsIO/privacytools.io/issues/375... and get all upset at people for "shitting on them". (Note mattatobin is also one of their devs).


Yikes. That's about the most insane exchange I've seen on GitHub.


Why was it used at all if it's basically an old fork of Firefox? Just curious as I've never heard of it before.


"Customization." It retains the older, Firefox ~25-ish UI. They've since upgraded the backend to Firefox 52, which is still Really Old. And they've got a lot of culty behavior around their community, if that's your bag.

If you absolutely must have an XUL-based Firefox derivative: 1) you don't, 2) stop, and 3) don't use this one if you don't listen to #1 or #2.


[flagged]


Yes, I have seen your frothing Reddit posts, you don't have to repeat them here. "Leftist fascist"--can I give you a bit of free advice? If you have to modify "fascist", everyone and their dog knows it's because you have to modify your slur to make the mirror a little more palatable. Maybe find some new ones.

But I reject the square-peg premise you're trying to jam into that round hole anyway, as it's not a democracy, either. Your choices impact your neighbors without their consent. It's not about fucking over power users or whatever the weird little epistemic enclosure you come from wants to insist it is, it's about addressing technical debt both in terms of performance and security--and the latter matters quite a lot in terms of not enabling people who think they're power users to 1) blow their own leg off, and 2) blow the legs off the people next to them. That's why those "leftist fascist" evil scary Firefox people decided "hey, let's patch some of these up."

Customization, in and of itself, is fine (modulo the combinatoric explosion that leads to security holes). Few enough people would disagree with that, though I understand that the aforementioned epistemic closure holds as axiomatic that everybody just hates the tweaker types. On the other hand, customization at the expense of competent security development, which is what you get with Pale Moon, is not. And seeing as how there certainly doesn't seem to be much in the way of developer competence, to the point where "don't use the stuff that upstream has working just fine because we can't figure it out" is a legitimate attempt at "discussion", it ought to be dumpstered.

Somehow, this "power user" (god, I hate that term) gets by just fine with poor, benighted Firefox--having switched to it from Chrome/Safari after Quantum because it finally felt as pleasant to use as back in The Day. Clearly, though, that just makes me a radical-left SJW hater or something. But that's fine. Modern Firefox is heckin' great.


Looks like the sock puppets have arrived, quoting myself from that thread:

https://github.com/privacytoolsIO/privacytools.io/issues/375...

> There also seems to be a small group of devout followers who travel from the Palemoon forums to other parts of reddit, HN and github to spruik the project and derail criticism. I've observed (without naming names) it's the same names doing it over and over again.

They always use the same phrases too: "nazi", "fake news", "false narrative", etc. They tend to refer to themselves as "power users" with a clear tone of superiority to their comments.

I guess it must be the deep state! (sarcasm).


[flagged]


Users sure as hell didn't leave for BlueMoon or the other also-rans.

(And nobody is falling for your "leftist Nazi" shtick. It's quite telling that you can't even come up with terms for your imaginary opponents and have to fall back on misusing "fascist", "nazi", and "dictator", isn't it?)

Here, in the interest of the English language, are some actual left-wing baddies you can use: Bolsheviki, Stalinist, Stasi, Politburo, Khmer Rouge, ETA (potential for puns here), (Popular|Democratic|Worker's) (Front|Army|Committee|Union) for (Liberation|Democracy|A Better Future|Legalisation)


Help. Police. Murder. Stop. Don't. Come back.

https://www.youtube.com/watch?v=W9ZD3_ppcPE


The real security disaster on the modern web is commercial groups taking over setting standards and turning the web browser into an OS with all the security problems that entails.


The opposite is the case - web applications are one of the best things to happen, security-wise, in a long time.

Web applications are fully isolated and sandboxed, have fine-grained permissions, are easy to inspect, and the runtime is built with a modern threat model.

ChromeOS is probably the most secure desktop OS for this reason.

I want my browser to expose more functionality to web apps, because it means that I have to run less random unsandboxed code on my underlying OS.


I'd like it more if my underlying OS provided these features instead of running an OS on my OS.


We need both - some kinds of applications need direct hardware access. But the kernel attack surface is huge, even with seccomp and friends.

An app on my smartphone or - much worse - an Electron app "sandboxed" in a flatpak on my desktop has access to a far wider range of dangerous APIs than a web application. What's wrong with a browser as a high-level OS?


It's mostly the layering that bothers me.

I don't mind Chrome OS and love my chromebook.

Some of this is aesthetic so I don't really expect to change minds, but if we lived in the world of "The Birth & Death of JavaScript" and booted to some kind of Web OS I'd be annoyed at the loss of low-level hackability and get over it.

Booting to Linux, then booting a browser to get to a normal app that doesn't need network connectivity "feels" wrong.


Booting to Linux is just because Google did not want to start from scratch.

If they took a Xerox approach, ChromeOS would have a tiny microkernel, type 1 hypervisor style, and jump directly into Chrome.


I agree, I'd like it if all app platforms were portable, secure, and linkable by default. Given that they aren't, my allegiance is with the web.


A reasonable view.

I value discoverability and comprehensibility of the underlying platform a bit more, and have recognized that isn't likely to happen any time soon.


We had Java....


> are easy to inspect

Until they're delivery vehicles for obfuscated wasm-to-canvas rendering applications. Then nothing of the "web as a graph of hypertext documents" will be left.


There's been a lot of ridiculous FUD about "DRM will be used even for text!!1" (wrt. EME especially) and nothing of that sort has materialized.

(also, wasm changes nothing here, you could always obfuscate js just as much)


> wasm changes nothing here, you could always obfuscate js just as much

It is a significant change because it lowers the bar to creating a black box. wasm offers the performance; canvas provides an opaque, flexible render target. Without either, you're limited to obfuscating your JS (which indeed already happens) and obfuscating your DOM (also happening). But the DOM still leaves enough surface for adblockers and other extensions to intervene. Perhaps throw in a WebSocket/WebRTC channel to funnel all your data over a single connection, and you have basically created a single opaque blob which extensions cannot interact with on behalf of the user.

You turned the user agent into the site's agent.

> "DRM will be used even for text!!1" (wrt. EME especially)

I am not aware of EME offering a data path to bring encrypted text to the screen. Without such a path these claims have no merit; wasm + canvas, on the other hand, offer a clear path.
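
To make the canvas point concrete, here's a toy sketch (no wasm even needed) of text that exists only as pixels, leaving nothing in the DOM for an extension, reader mode, or archiver to find:

    // Text drawn to a canvas has no corresponding DOM node, so there
    // is nothing for an adblocker, screen reader, or "save page"
    // tool to select or serialize.
    const canvas = document.createElement("canvas");
    canvas.width = 480;
    canvas.height = 60;
    document.body.appendChild(canvas);
    const ctx = canvas.getContext("2d");
    ctx.font = "16px sans-serif";
    ctx.fillText("Invisible to document.querySelector()", 10, 35);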


Go to nasa.gov with javascript off. Tell me how much text you can read. It doesn't have to be DRM. It just has to be ever more complex JS standards and engine implementations that only a handful of companies can actually make.

Once it's an application instead of a document the text just isn't there.


Flipboard made react-canvas to render directly to canvas instead of the DOM. https://github.com/Flipboard/react-canvas


Websites are at least supposedly sandboxed, so they are not as much of a risk as running native binaries. But this is getting worse and worse as browsers expose more and more of their host operating system's functionality. The benefits of using a website instead of a native app are quickly disappearing, while the drawbacks have only been somewhat mitigated. We're getting to the point where browsers are worthy of the decades-old criticism Emacs has received: they have eventually become an OS with many fine features, simply lacking a good web browser. For the privacy (and security) conscious user, modern web technologies will undermine you every step of the way, or simply break if you choose to stand your ground.


Sandboxes are security features that have to be implemented, though, and if implemented incorrectly they will create vulnerabilities. I'm sure Google is on top of these, but these random unheard-of browsers need more scrutiny.


The thing that kills me is that they forked a pretty bad (at the time) browser core to make Goanna, then the parent of the fork got way better and left them in the dust.

I would actively block Pale Moon if I thought I worked with people silly enough to use a slow single-process browser in 2020. (Instead of a moderately slow multi-process browser. But I digress.)


Just empirically, browser security has improved dramatically over the last decade or so. When did a browser exploit last cause actual real-world damages? I remember network infrastructure compromises, social engineering, fake gmail logins, etc, but the weekly flash or PDF worm is gone, and JS seems to be holding up remarkably well given its size and complexity. Aluminium Centrifuges seem to also be more vulnerable these days than Chromium Browsers.

Somebody seems to be doing something right.


> their binaries are built with no mitigations whatsoever. ASLR disabled, no stack canaries, no fortify.

Provide evidence.


Anyone can easily verify this by downloading the binaries, and running checksec themselves.

Here's the output:

    $ sha256sum palemoon-bin
    0b7f4ad73fc671e20bfb2366aa9d9ad81e82a3c8f63acde7f37063e73fda2141  palemoon-bin

    $ checksec --file=./palemoon-bin --output=json --extended | jq 
    {
      "./palemoon-bin": {
        "relro": "partial",
        "canary": "no",
        "nx": "yes",
        "pie": "no",
        "clangcfi": "no",
        "safestack": "no",
        "rpath": "no",
        "runpath": "no",
        "symbols": "no",
        "fortify_source": "no",
        "fortified": "0",
        "fortify-able": "15"
      }
    }

Notably, the binary has not been compiled with -fPIE and -fstack-protector, which disables two very basic exploit mitigations - ASLR and stack canaries. Any self-respecting Linux distribution enables these compiler flags by default nowadays. -D_FORTIFY_SOURCE is also missing, unlike Chrome or Firefox. A modern browser uses dozens of custom binary exploitation mitigations in addition to this.

Mitigations and sandboxing are the difference between an exploit a CTF player can write on an afternoon, and a multi-month expert-level endeavour.


Which version is the `palemoon-bin` that you have there?


It's 28.8.4 x64, downloaded straight from the website. (I duplicated his efforts; tbf, you could have, too.)


> It's a web API, but it's a web API that requires development work

Care to name one web API that doesn’t require development work?

Web standards evolve, new APIs are introduced. News at 11.


Yeah, that's my point. Pale Moon's developers forked a complex project and are now asking people to not use things that the mainline project now supports just fine.

Checks written, can't cash, etcetera.


Fair enough, should have sensed the sarcasm.


There are too many APIs released at breakneck speed, often poorly designed and rushed.

See Google's own Web API tracking page [1]: Chrome adds almost 1,000 new APIs per year. Last year alone they added over 500 APIs that are not present in other browsers, yet they pretend these are standard and will not remove them even if other browsers are not going to implement them due to multiple explicitly voiced concerns.

So yeah, no. Your news at eleven is misleading at best.

[1] https://web-confluence.appspot.com/#!/confluence


> Definition: API: For the purposes of these metrics, an “API” is an interface name + attribute or operation pair.

So there aren’t 1000 new Web Component specs coming out each year, no. And fewer make it into W3C/WHATWG specs.


No. But there are 1000 new web APIs that you have to implement yearly just to keep up. And Web Component specs are a part of that.

And then, despite all the objections from all the other browser vendors, Google releases an API in production and will not mark it as experimental. Yes, a Web Components-related API (Adopted Stylesheets).
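
For context, the API in question (constructable stylesheets / adoptedStyleSheets) looks roughly like this in engines that ship it; the element name here is invented:

    // Constructable stylesheets: build a sheet in JS and share it
    // across shadow roots instead of duplicating <style> tags.
    const sheet = new CSSStyleSheet();
    sheet.replaceSync("p { color: rebeccapurple; }");
    class FancyNote extends HTMLElement {
      constructor() {
        super();
        const shadow = this.attachShadow({ mode: "open" });
        shadow.adoptedStyleSheets = [sheet]; // adopt, don't copy
        shadow.innerHTML = "<p><slot></slot></p>";
      }
    }
    customElements.define("fancy-note", FancyNote);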


The same can be said about any new CSS features and other new stuff that goes into the browser. Browsers are evolving rapidly, either to make devs' work easier, or users' work easier, or just to make everything prettier or faster.


> Trying to brand Web Components as Google-only may have worked in the v0 prototype days in like 2014, but it's factually not the case today.

Even Mozilla is having problems with Google WebComponents. There are 100 bugs open in Bugzilla about it.


Your comment doesn’t seem to be related to the text you’ve quoted. “It’s difficult to implement” is not related to “it is owned by Google”.


YouTube uses WebComponents, and its pages are unviewable in the Wayback Machine: http://web.archive.org/web/20200317150441/https://www.youtub...


Not sure, but doesn't AMP have "custom elements" which are not just CustomElements + Shadow DOM but Google pseudo-proprietary "pre-loaded/created" elements?

Anyway, it's nevertheless troublesome that Google has so much influence that they can just push new web features without much discussion with other developers/browser vendors. Due to differences in internal design, some features can be simple for Google to implement but hard for everyone else (or the other way around, in which case Google can reject them by simply not implementing them).

EDIT: Also, the non-JS web is basically dead. People should come to accept that, even if it's not perfect.


> EDIT: Also, the non-JS web is basically dead. People should come to accept that, even if it's not perfect.

No. I won't accept it. The JS web sucks. It's slow, bloated, provides nothing of value to me the user, and exists only to feed the maw of advertisers and platform providers who will never be satisfied with the tech we have.

HTML pages are still faster to use and 1000x easier to deploy than all this "modern" web crap that delivers less and less for more and more. Look at the site you are on!

I've never heard about this Webcomponents thing before today. I hate it already. No I'm not willing to give it a chance. I am certain, certain that it will deliver nothing but pages with 100K of actual content which need 10MB code downloads and 20s rendering times on an i5.

Web designers, developers, and especially browser developers are destroying the world wide web and replacing it with a glorified series of custom-flash apps. It's every bit as terrible as when flash was around no matter how much spin is put on these "standards". Google is the new Adobe and your website is awful again.


You must not use any business software, because that's where JS is making a killing. People rain on JS because it's abused on content sites that don't need it, but that doesn't mean it doesn't have massive utility for solving complex software problems requiring a high degree of interactivity which a static HTML form could never provide.


>Also, the non-JS web is basically dead. People should come to accept that, even if it's not perfect.

This is what it all comes to. Unparsable, terrible-to-use websites that are also incompatible with text-to-speech are the future because ad companies demand it. Users are complete non-entities in this decision.

Right now Gmail's HTML version is both faster and lighter than the regular version. The regular version's only advantage is not having to reload the entire page to load mail (which is still faster in the HTML version anyway).


All of that is window dressing over custom elements polyfilled by document-register-element[0] and Preact[1].

It's a giant SSR machine, from what I can tell. Also, web workers are leveraged to do a lot of the heavy lifting when it can't be SSR'd.

[0]https://github.com/WebReflection/document-register-element [1]https://preactjs.com/


Even worse... the Pale Moon guys are unskilled amateurs of the worst classification - the kind of developers whose activity should be force-closed to protect others from their unintentionally spread incompetence, which puts tons of other people at high risk.

Not only that, they are rude and shit in the face of a big part of the Linux community by threatening them in the worst way possible.

Then we had a server hack with compromised binary files....

People should be made aware of Pale Moon and their amateurish activity, so they can immediately uninstall this danger-ware from their systems!


They have blown things out of proportion (see parent comment). Only extremely minor browsers will be affected. Chromium (Chrome, Edge, Opera), Quantum (FF) will be fine. I suspect even the likes of KHTML will add support for it.

Pale Moon is a fork of Firefox made before the removal of XUL (as I believe is their selling point). They would have to back-port Web Components (which may be impossible due to Quantum) or write it from scratch. This is all really hard for them because they likely have fewer than 10 active developers.


Can't they just ship the browser with js webcomponents polyfills?


Back when I still had to support bleeding-edge web apps more than a few of my bug reports were from Pale Moon users because the devs couldn't configure their compiler properly and were always behind on bug fixes. Impossible to act on that stuff, though I tried. Once I realized how bad PM's security story was I gave up on supporting it, not gonna expose my development machines to that.


[flagged]


Firefox did not have process-level sandboxing till Quantum. Exploit a JS bug in the renderer and you had code execution on the host OS.

"Fake news" accusations aren't welcome on HN, people here prefer to disagree amicably.


Exactly, I don't understand what they mean by Google WebComponents. CustomElements v0? v1? Both?


Both, anything, everything. They're an ancient outdated (==insecure) fork of Firefox and they can't implement any new feature themselves.


Thanks for clarifying. While reading this I was under the impression that they were referring to specific web components created and hosted by Google, and I further assumed that Google denied crawlers access to the web component modules (hence closed). Thanks for making it clear that is not the case.


It's a document API that, when used, can't be saved as a document. Pretty simple.
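
Concretely, a plain-JS sketch of why (exact behaviour of individual "save page" tools varies, but standard DOM serialization drops shadow trees):

    const host = document.createElement("div");
    const shadow = host.attachShadow({ mode: "open" });
    shadow.innerHTML = "<p>Rendered for the user, but...</p>";
    document.body.appendChild(host);
    // ...ordinary serialization never sees the shadow tree:
    console.log(host.outerHTML); // "<div></div>"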


Like it was said already... a lame excuse for "We can't successfully implement it, our browser is becoming incompatible with a large part of the current web, so please do not use it to make us happy"

Without doubt, this will be Pale Moon's final run, as the makers are fully unable to implement complex features on their own; they do not have the tiniest bit of knowledge of how to implement medium- or high-complexity ECMAScript features.

This link explains everything with all necessary details - also shared before: https://www.reddit.com/r/palemoon/comments/fk4fnl/attention_...


Nonetheless, it's still mildly concerning that smaller players in the browser space are finding it difficult to keep pace with all the new APIs being introduced to the web these days. Even Microsoft sort of gave up and killed their browser engine in favor of a Chromium fork.

It's a tricky situation with no obvious solutions. The Pale Moon dev's plea to slow down isn't going to work; devs want these new features and they're not going to stop using them just because a smaller browser developer with no significant market share can't keep up.

There is, perhaps, the beginnings of a solution with the Extensible Web Manifesto[1], which aims to shift the focus of browsers from implementation of many high-level features to instead focus on smaller, simpler low-level features which high-level features can then be built on top of. (Though ironically, web components are a part of this movement.) CSS Houdini[2] is one such feature which seems like it might be relevant in this specific situation (though whether it's helpful I have no idea; in the short term I suspect it might just translate to a lot more work).

[1]: https://github.com/extensibleweb/manifesto

[2]: https://developer.mozilla.org/en-US/docs/Web/Houdini


All the more reason to cherish Firefox.


Firefox is in the sights of Google, don't worry about that. They will find a weakness and exploit it to try to become the only "true" browser.


> devs want these new features

Devs wanted Flash and RealPlayer plugins back in the day too. They wanted ActiveX plugins too. No one else wanted them, and it was ultimately the downfall of IE.

> Microsoft sort of gave up and killed their browser engine

This will be the downfall of Chrome. Google will ultimately be fined under anti-trust, Chrome funding will dry up, and browsers will stop supporting these absurd features. Have a nice time rewriting all your web components in JavaScript in 5 years.


> Devs wanted Flash and RealPlayer plugins back in the day too. They wanted ActiveX plugins too. No one else wanted them

The idea that users didn’t want Flash is laughable. It was enormously popular and enabled all kinds of multimedia presentation that the web couldn’t come close to rivalling.

> Have a nice time rewriting all your webcomponents in javascript in 5 years.

Web components are JavaScript. And it’s a standard agreed upon by all the major browser manufacturers. They aren’t going away, no matter what happens to Google.


Had it not been for the iPhone, we would all still be happily using Flash, instead of waiting 10 years to catch up with what was possible in 2011 regarding 3D in the browser.


Small players were never viable in the space. If they were people would care about, for example, Opera. Instead, probably 12-ish people care about Opera.


> devs want these new features

Except that they didn't ask for them, as shown here (and note that Google removed <style scoped> from Chromium):

1. https://github.com/whatwg/html/issues/552#issuecomment-17810...

2. https://groups.google.com/a/chromium.org/forum/#!searchin/bl...

Also, even Mozilla is having problems with Google WebComponents. There are 100 bugs open in Bugzilla about it.


But what are they afraid of?

I can run Firefox or whatever for modern websites, and Pale Moon for 10-year-old sites running deprecated technology.

No one will ever build a website using WebComponents and NPAPI, so what's the problem?

The only possible explanation I see is that the Pale Moon team desires and thinks that Pale Moon can become more popular than Firefox or Chrome simply by hoarding their discarded old tech. It's delusional, like selling computers with parallel ports and VGA cables and begging people not to use DisplayPort monitors.


> No one will ever build a website using WebComponents

WebComponents is adopted by Chromium (and with difficulty by Firefox).


I think they meant that no one will ever build a site using WebComponents and NPAPI together, since NPAPI has already been discontinued before WebComponents has taken off.


I think you missed an association there. (WebComponents AND NPAPI)


Imagine they started working on hypermedia browsers 30 years ago, and were now admonishing everyone for not sticking with standard SGML. Yes, I'm sure DSSSL had a way of doing scoped styles. Scheme has scopes, right?


Side note: from reading the comments on that Reddit, it sounds like there's some weird affiliation with the 8chan alt-right crowd in the Pale Moon community?


I don't know if it's 8chan, but there's a really weird cryptofascist--barely even "crypto", tbh--strain in that community. And I can see why. The community that huddles around outdated and bad tech for the reasons they do is going to skew regressive and is going to get progressively (ha!) more upset and resentful over time. That self-selects for the kinds of toxicity that share significant comorbidities with the "alt-right" and worse groups on that axis.


I've noticed this too, and I've also noticed it stems from their team members. https://news.ycombinator.com/item?id=22614144


Ahh, good ol' guilt by association: https://en.wikipedia.org/wiki/Association_fallacy


I had never heard of PaleBla before today. After seeing the BSD issue mentioned above, a short visit to their forum, and what’s being said by their supporters here I can unequivocally say it’s the most toxic and incompetent community in tech I’ve ever seen.

It’s weird and cult-like, but without drugs, dancing, or women. It’s sort of alt-right, hating gays and foreigners just as much as post-2008 JavaScript, but there just isn’t any reasonable model explaining how that connects.


Simply an excuse for "We can't successfully implement it, our browser is becoming incompatible with a large part of the current web, so please do not use it to make us happy"!

Result: they are going to die and are losing hope and faith, like the link here describes. Read especially the IRC link where they discuss things like that with their "ecmascript guy" Gaming4JC, who has the task... "WITHOUT ADDITIONAL HELP"... to make Pale Moon "COMPATIBLE WITH TODAY'S WEB". Honestly... what a joke....

More here: https://www.reddit.com/r/palemoon/comments/fk4fnl/attention_...

Why is this the truth? They failed in the past at implementing stuff on their own, and they had to use new Firefox variants to rebase their UI onto them.

As there is no way of going forward anymore without losing all their customization abilities, they are screwed: Web Components is again such a breaking point where they can't succeed on their own, because their team is missing people who would be able to successfully implement stuff like this. It uses Rust/Stylo/Servo, and that is way too complicated for them; they lack all the necessary knowledge of how to convert it to their current codebase of Firefox 52!


Even if that reddit post is 100% correct, the author's behavior strikes me as the epitome of trolling. So I'm not inclined to take their words particularly seriously.

The author basically walked into a community of Pale Moon enthusiasts and shouted: "Hey Palemoon fans! Y'know that thing you love? It's going to die! Sorry!".

The legitimately-concerned version of this post would go something like: "If Palemoon can't implement web components, does the browser have a future?"


Well, it COULD mean the end of this browser, which I would not call a big loss, as it was always outdated and the makers ripped out vital security features like sandboxing, or were unable to implement the Tor-based privacy fixes that Mozilla implemented!

It is better that the fans learn WHAT COULD HAPPEN, because if they don't learn the truth, they will suddenly be sitting around with a dead and even more outdated browser.

This is too important to ignore or not to post about.


By some strange coincidence maintaining a working web browser became a task that only Google can do "properly". Microsoft gave up. Firefox is struggling. Doing something from scratch is insanity.

So here we are. A website chock-full of people who constantly post about modularity, protocols and advanced type systems, yet sheepishly accept that a browser with its zillion unrelated features is a giant, tightly coupled blob that cannot be tackled by anyone except an ubercorp like Google.


This is very true. Google goes off cowboy coding and says "well, we're the big dogs now, and what we say goes" and thinks everyone has to follow. They don't care about standards; they care about being the last one standing, because then they can do what they want with your browser and with what goes into your eyeholes.


They can do it because they have the market share and are the standard for all practical purposes. When MS had the market share they were able to do the same. And I'm sure Mozilla would not look like they're struggling maintaining the browser if they had a commanding lead that made them the standard.


How do you mean that Firefox is struggling? That is not my experience. Everyone around me is ditching Chrome for Firefox.

Firefox is also ahead in a few areas: containerisation & privacy. It's also now the default browser on a few OSes.

To me this does not indicate struggling at all.


https://gs.statcounter.com/browser-market-share

Chrome: 65.54%

Firefox: 4.58%

Perhaps your sample size of people around you is too small?


You might well be right, but I, along with lots of other Firefox users, run extensions like NoScript, which blocks third-party JavaScript unless explicitly whitelisted, so I expect a much higher proportion of Firefox users probably don't show up on analytics packages (which is how StatCounter stats are generated). The DuckDuckGo Privacy Essentials plugin also blocks StatCounter, as I'm sure do many other privacy-related plugins.

Even if you are a Firefox user who doesn't use third-party plugins, Firefox itself will block StatCounter if you have Enhanced Tracking Protection turned on. I couldn't find a similar setting in Chrome.


Unless you're spoofing your User-Agent string, servers still know what browser you're using. You don't need to be running client-side analytics scripts for browser market measurements.


Sure, but that's not how statcounter or most advanced analytics packages work.

Your browser doesn't make a request to the analytics server if you're blocking it, so while the content server might know the user agent, the analytics server generally doesn't.


I'd think it's a fairly safe bet that noscript users aren't a statistically significant part of total web users


The marketshare might not be great, but it's doing just fine technology-wise. No problem in "maintaining a working web browser" there.


Oh the tech in firefox is amazing, I'm just pointing out the marketshare is pretty lopsided for Chrome over Firefox.

It isn't as though there is actually a mass defection from Chrome to Firefox outside of a very small number of people.


How many people is 4.5% of the Internet, though? Would you close a project because it only has a couple hundred million daily users? Would you call it a failure?


My understanding is that they still struggle from a finance and ops perspective. Doesn't matter how good your product is if you can't keep the doors open...


Firefox has long since stopped being a requirement in the acceptance criteria for web project delivery.

I keep testing it, because it has been my favourite browser all the way back to Netscape days.


Web browsers seem to be the refutation of the "if you don't like it, fork it" sentiment of OSS. If it were nearly any other project, people wouldn't be strongly reiterating that using such a fork is irresponsible and dangerous. It mostly seems to be because nobody believes it's even possible for Pale Moon to keep up with the rapidly changing web standards, because not enough people care about it. All the development power is concentrated on the bleeding edge, things are moving rapidly, and this isn't solvable by asking for volunteers. Sure, all of the important code is in the open for anyone to contribute to, but it doesn't matter what your vision of the perfect open source web browser is if you can't get enough people to help work on it. Unlike 99% of other OSS projects, browsers are just too complex, and their features are added by large committees who decide what's most important to add.

Even though the major browsers are open source, the significant developments in the web platform and the code changes necessary for them are practically impossible for an average developer to implement. They're decided on outside the general OSS community. You simply have to have a large team to be able to keep up. It feels way different than just contributing to less significant projects on GitHub with fewer maintainers. OSS doesn't usually mean that the people maintaining things move forward mostly among themselves, with goals set by larger organizations outside the general community, but with browsers it can happen.

I don't think there's any real solution to this. The alternative was to deliberately limit the browser featureset while it was still small, but now all those features will never go away. And many of them provide enough benefit to both developers and users that it's not at all clear that adding them was a net negative, even if it really meant only corporations can implement them successfully. The complexity is probably an unintentional and unavoidable byproduct of the sheer number of features.

I'm starting to view browsers as complicated operating systems in themselves, like the Linux kernel, which has its own significant developer culture and massive codebase. How much can you simplify such a thing while providing all the features people want? It's a tough question.


Web browsers became what operating systems were 20 years ago.


But with great security sandboxing by default, with multiple interoperable open-source implementations, built on open standards, and cross-platform and not bound to any specific machine architecture.

Imagine downloading dozens of different unsandboxed apps every single day on a classic operating system. You'd get so many viruses (and just broken software that breaks or slows down your system) doing that. But now we can do that safely in browsers with (web) apps. That's amazing.


> But with great security sandboxing by default, with multiple interoperable open-source implementations, built on open standards, and cross-platform and not bound to any specific machine architecture.

All that applies to operating systems and has been like that for decades.

> You'd get so many viruses (and just broken software that breaks or slows down your system) doing that.

That is simply false, and if you can prove it, there is recognition/money waiting for you.

> But now we can do that safely in browsers with (web) apps.

No, bugs are still a thing the same way they are for operating systems.


> All that applies to operating systems and has been like that for decades.

Windows has had "great security sandboxing" for "decades"? Are you sure about that?


Who has talked about Windows?

Anyway, Windows has had a proper kernel since Windows NT, 20 years ago.


What operating systems are you talking about? I'm considering popular operating systems as they're used by end users that are downloading and running applications on their own machines, and I was mostly considering ones that existed 20 years ago. I'm not thinking just of what the OS's kernel technically supports, but the full experience of how it guides users to use it. All of my points go for Windows, MacOS (with apps downloaded outside of the app store), and desktop Linux, and half of my points still apply to modern mobile operating systems (iOS and Android).

If you told an arbitrary Windows user to try a game/app you made, and you link them an EXE file, then at a surely 99%+ chance, if they run it, they're going to do it in the default unsandboxed way that exposes their files on the computer to the executable, and gives your executable the chance to hook itself in persistently on their system. It may be technically possible to safely sandbox arbitrary applications from accessing all your user files in Windows by using special tools or by creating a user account per app, but the operating system does not guide users to do that, there's a ton of gotchas, and basically no one does that. I feel pretty confident labelling that as not "great security sandboxing by default".

>>But with great security sandboxing by default, with multiple interoperable open-source implementations, built on open standards, and cross-platform and not bound to any specific machine architecture.

>All that applies to operating systems and has been like that for decades.

Android is the only OS I can think of that almost ticks all of those boxes, but it still falls short. There aren't multiple separate popular/well-supported implementations of it. Its code is open, but by "open standards" I was referring to the whole decentralized standards process the web has; in Android, Google adds, deprecates, and discontinues APIs without any outside input. In the web, browsers generally only introduce browser-specific APIs in the short term and work towards unifying their APIs with other browsers. It's super rare for anything that makes it through the standards process to ever be removed; browsers prize backwards compatibility of standardized features far more than any comparable platform. Android apps generally aren't usable on non-Android devices; if you point an arbitrary Windows user or an iPhone user at an Android app, they won't be able to run it, at least not without some special expertise.


So you are moving the goalposts now to "user experience" (or something like that).

It was never a design goal of operating systems to assume a binary that you yourself ask to run is malicious. That makes no sense at all. And that is still the case nowadays.

It is the "Web" that brought us the "amazing", truly wonderful idea of running non-signed code on the fly. And, in Web's wisdom, instead of properly implementing sandboxing and using the operating system facilities for that, browser vendors decided to do a half-assed job. Then they realized how a bad idea that was (shocking!), and they have been patching holes of all kinds since then (and reimplement everything that was already there in operating systems, too). The last bits are the WebAssembly and WebGPU wonders.

And no, I was not talking Android. At all. If you study computing history, you will realize "decentralized standards" and "several implementations" are not something that first appeared with the "Web". We are talking the 80s here. Even earlier for some stuff.

So, please, if you have not studied the past, then do not claim what you know is the beginning of everything.


How many good, working web browsers have there ever been at one time? I submit it has pretty much always been one. That is, there has always been a single browser that could browse the web really well, and everything else has kind of sucked because it was poorly supported by content producers and suffered technical limitations.

Since I've been using the web it has been: Mosaic; then Netscape Navigator; then still Netscape for me, but a lot of the web shifted to IE; then Mozilla or IE depending on platform; then Chrome.

...there hasn't failed to be competition (i.e. people have tried), but they have failed to get traction.

Meanwhile the complexity of the web and its underlying protocols has increased a lot, user expectations have shifted from static page viewing to running all manner of apps in the browser, and necessarily browsers have become more and more complex.


Over the years I've used Mosaic, Navigator, IE, Cyberdog, iCab, Safari, and Firefox. There were others that I never tried like the original Opera before they became another Chrome fork. They were all capable in their day, and overlapped other capable browsers. The only consistent issues I ran into were websites that assumed everyone was on Windows (I wasn't) and running IE 6.


10 years ago you could safely use Firefox (Gecko), Chrome (Webkit), Konqueror (KHTML), Opera (Presto), Internet Explorer and small browsers like Lynx.

Today only Blink and Gecko remain. And for some reason web developers want Gecko to disappear.


Well, the browser market has, simply put, been in a state of complete fucked-up-ness for the past, what, two decades? IE6 was released in 2001, so yes, two decades. There was a brief period of "huh, things might finally be okay" when Firefox launched, and when MS finally - literally - buried IE in 2010, but that did not last long.

That moment was never a stable equilibrium. Too few participants in the market. Too much money in it indirectly, so Google gives it away for free. This of course absolutely makes it impossible for a real market to emerge where someone pays for the browser tech itself directly.


> By some strange coincidence maintaining a working web browser became a task that only Google can do "properly". Microsoft gave up. Firefox is struggling.

In what way is Firefox struggling? And you didn't mention Apple, who seem very capable of maintaining Safari.

The "strange coincidence" phrasing makes it sound like you're saying Google engaged in some kind of clandestine conspiracy. I think it's pretty clear that they wanted to make the web a more capable platform, and I'm personally glad they did. Imagine if everyone was required to create a native app on Windows, Mac, iOS and Android to reach all their customers. It would be a nightmare. Google Docs alone is enough demonstration that this is something people want.


Disclosure: I maintain an open source web component library.

This is a strange take on what I think the author is fighting against: a less open web platform. What confuses me is how the adoption of Web Components will somehow make things worse than they already are. Take a look at any contemporary web framework and you'll see there's little overlap in compatibility or portability -- sometimes even between versions of the same library!

I've seen first hand how difficult web components can be, but they're still a better solution than trusting the foundations of the web to the teams at Angular or React. In my opinion we need an API that lets young developers start their web apps with plain HTML/CSS/JS without running into the same decades-old issues that created frameworks in the first place. How should an intermediate developer begin organizing their CSS? Or importing helpful libraries? Or even something as simple as making a reusable HTML template without spending any time in Webpack?

The truth is that we don't have easy answers for these aspiring developers, and we won't get any sympathy from them by demanding the web return to its document roots. I think Web Components can solve all these issues with some guidance from the community. The platform is ready and so are we.


I am inclined to agree with this comment indeed. Having had to learn a lot about these things recently (and trying to stay away from frameworks), it is quite difficult to not feel 'boxed in' by the rules imposed by React, Vue, etc. I feel as though one is encouraged to learn a framework instead of the basics. Having learned the basics, I can now see why there are frameworks, and what their uses are, but often I wonder whether people decide to do something in [x] framework to make their work sound cool, before even knowing what it is they are building.


I absolutely agree. Not only can Web Components simplify web development and lower the barrier to entry, they can also bridge the gap between frameworks and the modern modular web.

As a developer who works in an enterprise environment, frameworks like React and Angular are our boon and bane. Beyond the advertised features, they are amazing at providing repeatable patterns for developers and offer structure to large and small projects alike. And in my experience, they have proven to be significantly smaller, more performant, and more maintainable than the vanilla JS apps that were being delivered to clients previously. Also, it is because of legacy browsers that we still need all the features these libraries and frameworks provide. Many of our clients are just now moving away from IE 10 and 11, and tools like these have kept us all sane.

So we take the cost of learning frameworks, bundling through webpack, and being tied into proprietary ecosystems, because the trade off is worth it to us when our focus is delivering value to the business and functionality to our customers.

Enter Web Components. Trying to maintain branding, coherent styles, accessibility, and a cohesive user experience across an entire enterprise can be a huge undertaking. Common component libraries help, but the multiple frameworks used throughout a company can result in duplicated and triplicated work. Web Components offer us the promise of creating these assets once and including them in any application we build, regardless of framework. And since they are spec compliant and framework agnostic, changing to a new framework--or no framework--in the future doesn't have the added cost of rewriting every component to match the new lib/framework API.

To address some other comments here, it's worth discussing who controls the specs, and what's the right path for the future of the web, but we still have to develop for the world we live in today.


I don't get it. The ability to source some js and then add a <custom-element> wherever you want just like it was a built in element is one of the best things to ever happen to web design. This is what we should have had ten years ago, not the post-jquery framework madness we have now.
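For illustration, a minimal sketch of what that looks like (the element name, file name, and styling here are made up):

    // my-element.js (hypothetical file): define the custom element once...
    class MyElement extends HTMLElement {
      constructor() {
        super();
        // A shadow root keeps the component's markup and styles encapsulated.
        this.attachShadow({ mode: 'open' }).innerHTML = `
          <style>p { font-weight: bold; }</style>
          <p>Hello, <slot></slot>!</p>`;
      }
    }
    customElements.define('my-element', MyElement);

    // ...then any page that loads the script can just write:
    //   <script src="my-element.js"></script>
    //   <my-element>world</my-element>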

I do think it was dumb that <style scoped> got removed though.


Is it okay for you if you save a webpage but no contents are actually saved? Try it: save this page [1] as HTML (with the native "Save as" feature). The saved page will be almost empty because it's made of web components. This is an example of an issue that should be fixed.

[1] https://bugs.chromium.org/p/chromium/issues/list

Edit: This might be fixed, see my first post https://news.ycombinator.com/item?id=22604632


I don't see how this is a fundamental failure of web components and not an implementation issue of the completely ignored and disregarded save feature.


It should be very easy to make the HTML save function also store the contents of those #shadow-root elements, so I don't see much of a problem there once WebComponents support becomes stable.
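A rough sketch of that idea (not how any browser's save feature actually works): walk the tree and inline any open shadow roots as markup. Closed shadow roots aren't reachable from page scripts, so a real implementation would need browser-internal access for those, and this ignores escaping, void elements, and comments.

    // Sketch: serialize a DOM subtree, inlining open shadow roots.
    function serializeWithShadow(node) {
      if (node.nodeType !== Node.ELEMENT_NODE) {
        return node.textContent ?? '';
      }
      const tag = node.tagName.toLowerCase();
      const attrs = Array.from(node.attributes)
        .map(a => ` ${a.name}="${a.value}"`).join('');
      const serializeAll = nodes =>
        Array.from(nodes).map(serializeWithShadow).join('');
      // If the element has an open shadow root, emit it as an inline template.
      const shadow = node.shadowRoot
        ? `<template shadowroot="open">${serializeAll(node.shadowRoot.childNodes)}</template>`
        : '';
      return `<${tag}${attrs}>${shadow}${serializeAll(node.childNodes)}</${tag}>`;
    }

    // e.g. serializeWithShadow(document.documentElement)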


YouTube uses WebComponents, and its pages are unviewable with Wayback Machine: http://web.archive.org/web/20200317150441/https://www.youtub...


Are you sure that's why? It seems possible that the issue is how the data is being loaded. How did you confirm that web components are the issue with loading that page from a saved file?


Yes I'm sure. It was a bug in SingleFile [1], and to fix it (I'm the author) I have to add a JavaScript script to the saved page to deserialize the shadow root contents that would otherwise be missing. It means that JS is required to view this saved page correctly.

[1] https://github.com/gildas-lormeau/SingleFile


Work is underway on speccing out a declarative shadow DOM, which would remove the need for that JS: https://github.com/mfreed7/declarative-shadow-dom
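The gist of the proposal (the attribute name is taken from that repo and still subject to change) is that a shadow tree can be expressed directly in markup and attached by the parser, no script needed:

    <my-element>
      <!-- Parsed into a shadow root for <my-element> instead of staying an inert template -->
      <template shadowroot="open">
        <style>p { font-weight: bold; }</style>
        <p>Hello, <slot></slot>!</p>
      </template>
      world
    </my-element>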


I mean, JavaScript being required to view pages sounds like Pandora’s already opened box.


I disagree; I've been developing SingleFile for 10 years. This is the first time I've needed to include a script in the saved page in order to display it properly. Until the existence of Web Components, JavaScript was not needed to display a saved page.

Edit: to be clear, saved pages are HTML snapshots, not full web apps stored in a page.


People save web pages?


I am not convinced. "Built-in elements" are really a small part of what makes a web application, and in many cases are replaced anyway by custom solutions. In fact, HTML tags are now meant to only express the semantics of a portion of a document, not a specific behaviour.

I'm not sure in what sense web components are different from the components of any other modern application framework, except for the fact that they can be easily imported (probably wrapped) in your framework of choice. But if you're using one of the main frameworks, there is already no shortage of components available for it. Each component will come anyway with a complex behaviour that will need to be wired up with the rest of your application, so the idea of just "source some js then add the element to the page" seems naive.

And web components come with their own set of problems: more opaque, harder to style, hard to make accessible. They break the paradigm of "everything lives in the same document" that is part of what defines the web. They can evolve at a slower pace than frameworks because they're native APIs that need to go through slow standardisation processes.
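(For context on "harder to style": from outside, a page can only reach the hooks a component's author chose to expose, typically CSS custom properties, which inherit into the shadow tree, or ::part() on internal elements marked with a part attribute. The component and hook names below are made up.)

    <style>
      /* Works only because the (hypothetical) component reads this variable */
      fancy-button { --fancy-button-background: navy; }
      /* Works only if the component sets part="label" on an internal element */
      fancy-button::part(label) { text-transform: uppercase; }
      /* Has no effect on anything inside the shadow root */
      fancy-button span { color: red; }
    </style>
    <fancy-button>Save</fancy-button>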

It sounds like an idea from the '90s: what if we could create those damn widgets ourselves with custom tags and drop them in the html? Well, it's been possible for many years. But it is only useful if you do it in the context of an application framework which already gives you the same feature, with better tooling.


> The ability to source some js and then add a <custom-element> wherever you want just like it was a built in element is one of the best things to ever happen to web design.

STOP MAKING YOUR PAGES SO SLOW!!


sourcing timezonepicker.js and using <tzpicker> is making our pages slow??


Yes, it does make it slow, and it reduces user customizability. I suggested a way that can make it fast, increase user customizability, improve security, and improve backward compatibility, too. Yet, it would seem, they don't like that. (Although, I don't know if the authors of Pale Moon will consider my idea; probably not, but it may be worth a try.)


Have you got any stats to back that up? I've been playing around with it and there is nothing from the profiler in Chrome or Firefox that suggests it might be slow.


Nope. If I use YouTube as a smoke signal for what an authoritative web components-based implementation should look/feel like, then it's definitely not in my 'ready-to-adopt' category. YouTube has significantly regressed as of late, with strange bugs/quirks throughout.

I'll stick to frameworks that emit standard HTML thanks.


> The ability to source some js and then add a <custom-element> wherever you want just like it was a built in element is one of the best things to ever happen to web design.

Web design is the worst thing that ever happened to the web.


Yeah, screw usable UIs! Bring back marquee tags!


To play devil's advocate, note that Google is trying to propose something to make the shadow root contents deserializable without relying on JavaScript [1][2].

[1] https://github.com/mfreed7/declarative-shadow-dom/blob/maste...

[2] https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...


I hope [1] takes off, that's a nice enhancement idea.


Ironically, that very page handles link clicks with JS without bubbling back to the browser, so since my browser is set to forbid opening new tabs on a webcoder's whim, I also can't cmd-click links on that page to open a new tab when I want it. And the page does open new tabs on simple clicks if it's allowed in the browser. Such simplicity and accessibility, wow.

I also have no idea from that post what particular problems the authors see with WebComponents and Shadow DOM and why those would be Chrome-only.


Given they are based on a fork of Mozilla's Gecko engine, I think the real reason behind this post is their inability to cherry-pick Web Components from Gecko.

It's likely the compatibility cost is making the browser unusable on many websites, which is rendering it useless over time.


I mean, with the dawn of webassembly, I feel as though things are going to get closed off more and more. We've had this with flash and java applets too. Ultimately I hope an open internet will prevail. Otherwise we will not be able to archive things effectively. Don't get me started on accessibility also.

While I'll happily jump on the let's-bash-google-train, is there any effort to do the same thing in a non-scoped way?

I feel a set of 'web-components' would in fact be a super useful thing to have for everyone. Why does it have to be google-chrome / whatever-specific and locked down?


> is there any effort to do the same thing in a non-scoped way?

It's already done! Web Components (Custom Elements + Shadow DOM) are web standards, ratified by W3C, and they already work great in Firefox and Safari. This whole article is simply incorrect.

http://w3c.github.io/webcomponents/spec/custom/ https://w3c.github.io/webcomponents/spec/shadow/

https://www.webcomponents.org/ see "Browser Support"


I would note that "ratified by W3C" does not mean much nowadays. WHATWG won that war and the W3C now essentially rubber stamps whatever WHATWG decides.


Only for HTML, not for CSS & JS. JS is handled by TC39 and CSS is still at W3C.


What happened there? I am not in the loop on this, so wouldn't mind getting some more details.


When W3C's XHTML 2.0 stalled WHATWG created a new "living standard" (which is subject to constant modifications and additions) to continue HTML development. W3C attempted to create fixed standards from the document (e.g. HTML 5.1, HTML 5.2, etc) that only included things that were widely supported and that met standards for accessibility and so on. However, due to the constant churn of WHATWG's document this was difficult and WHATWG were strongly against having what they saw as competing standards.

Therefore there was some conflict between the two groups until WHATWG emerged victorious. Here's the final agreement: https://www.w3.org/2019/04/WHATWG-W3C-MOU.html

It boils down to complete capitulation by W3C.


Interesting stuff! Thanks for sharing :)


They... don't work great.

They cannot be serialized, they break accessibility, they break screen readers (AFAIR), they are not lazy loaded, they do not participate in form events, they...


Screen readers and accessibility software have never worked well* with arbitrary applications, and that's what sites using the shadow DOM aim to be.

Screen readers especially only do well on simple documents, and only work well with apps they're specifically designed to interface with, which for the most part are apps used in document creation or text communication, like Word or Skype.

*Disclaimer: "well" is in the eye of the beholder. It's challenging to use them with arbitrary apps, and the time investment is only useful if that app is going to put food on your table.


They require JS, which is definitely a drawback.

But they work fine in screen readers, (you can screw this up with JS, just like any other HTML) and they "participate" in form events just like other HTML.

Since they require JS, you can lazy load them if you want (it's just JS).
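A rough sketch of that (file and tag names made up): define the element only when it's actually needed, relying on the fact that existing tags upgrade automatically once the definition lands.

    // Only fetch and define <heavy-chart> if the page actually contains one.
    if (document.querySelector('heavy-chart')) {
      import('./heavy-chart.js')   // heavy-chart.js calls customElements.define(...)
        .catch(err => console.error('failed to load heavy-chart', err));
    }

    // Elsewhere, code can wait for the upgrade:
    // await customElements.whenDefined('heavy-chart');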


I don't know the state of it today, but even in summer last year custom elements with Shadow DOM were still excluded from form submission. The form-associated custom elements proposal was enabled in Chrome; no idea if Safari or Firefox went and implemented it, too.
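For reference, the form-associated custom elements API (the part that was Chrome-only at the time) looks roughly like this; the element name is made up:

    class FancyInput extends HTMLElement {
      static formAssociated = true;              // opt in to form participation

      constructor() {
        super();
        this._internals = this.attachInternals();  // ElementInternals
      }

      set value(v) {
        // Whatever is set here is what the surrounding <form> submits.
        this._internals.setFormValue(v);
      }
    }
    customElements.define('fancy-input', FancyInput);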

Shadow DOM also throws off accessibility IIRC if aria-labelledby and similar cross shadow DOM boundaries [1], but I do agree they should be largely accessible to assistive technologies.

You can't truly lazy-load them. See Rich Harris' (author of Svelte) take on Web Components [2], and these two articles by a Web Components proponent listing the many issues they still have [3].

[1] See for example this tweet https://twitter.com/sarahmei/status/1198069119897047041

[2] https://dev.to/richharris/why-i-don-t-use-web-components-2ci...

[3] https://dev.to/webpadawan/beyond-the-polyfills-how-web-compo... and https://dev.to/webpadawan/the-journey-of-web-components-wron...


> I mean, with the dawn of webassembly, I feel as though things are going to get closed off more and more.

This is already possible today - many Qt applications, for example, compile to WebAssembly with minimal changes required.

Look at this nice Qt rich text editor demo, rendered in a canvas: https://s3.eu-west-2.amazonaws.com/wasm-qt-examples/last/ind...

Even copy&paste works! But except for use cases like porting existing niche enterprise applications, I don't see why anyone would prefer this over standard web development.

Performance is worse, it's very slow compared to the DOM, accessibility is nil, it requires arcane tooling and the ecosystem is so much smaller.


And that the very simplest examples have to download 12MB of code before they can do anything. (OK, so it’s probably more like a 4.5MB download. Still waaay bigger than you should require for most purposes.)

And you don’t seem to be able to enter astral plane characters at all (though that’s just an implementation bug that they haven’t dealt with yet, and shouldn’t be a fundamental problem of the approach).

And controls are all wonky and non-native in look or feel. And keyboard control slips badly in many places so that you lose your place within the window easily.

Recompiling desktop apps to the web (and specifically Qt, but also excluding games, which live somewhat in a space of their own) is very niche.


It doesn't work that way. Technology almost always can be used for more than one purpose. WebAssembly will grow the web a lot. Sure, it might replace some existing parts, but that's not because WebAssembly is "evil" or "closed", but because some publishers/sites/folks want to go into that direction. They are probably already doing a lot of "closed" stuff. DRM? IP restriction? Continuous connection checking or the site stops working? Idiotic session timer? Restricting registration to certain users for no real legal reason at all? Obfuscated HTTP API? The list goes on.

Worrying about dumb technology is irrelevant. (If you want to worry about tech, worry about AI, that has the potential to really cause problems.)


WebAssembly is not more closed off than normal minified JavaScript.

Sure, you could combine WebAssembly with some DRM tricks to close it off. But you can do the same with normal JavaScript.

At the same time, WebAssembly does open the web to more programming languages and more programs/use-cases (due to closer-to-the-hardware performance).

Lastly, WebAssembly turned out to be very useful as a standardized cross-platform, cross-architecture sandboxing system, i.e. what Java tried but failed to do.


WebAssembly changes nothing about the issues described in the post. WebAssembly still relies on the Web APIs in the browser.


Google essentially has the same role Microsoft did at the height of the browser wars...majority market share and lazy developers. So new shiny things tend to be Chrome-only.

Webcomponents exist as a loose set of standards that are marginally cross-browser, but Google has the biggest/most popular framework using web components.


The big difference is that Google is building well-specified new web standards which are quickly adopted by other browser vendors, rather than adding a heap of poor proprietary browser extensions that become de-facto standards.

They even deprecated Chrome Apps in favor of the PWA standard. Google has nothing to gain from hurting the open web ecosystem.


That's true, however the new standards are often poorly designed and require Google level resources to keep up with the pace of change.

They've basically ensured that another Netscape/Chrome cannot happen, they've pushed the complexity of the web so quickly that only the biggest most well funded companies can afford to keep up.


Browsers are now about as complicated as operating systems were in the 1980s. It’s not impossible to create a new one, but it’s certainly not low hanging fruit anymore.


"Google has nothing to gain from hurting the open web ecosystem."

I really want to believe this. But how does AMP fit into this? I feel it is quite counter to the idea of an open web. Please do not take this as an attack or criticism, I would like to, in fact, learn some better arguments in favour of why Google would support, rather than suppress an open web ecosystem.


From the perspective of a service connecting eyeballs with the thing they're looking for (and some ads along the way), AMP is a relatively anodyne amplification on top of caching strategies. "Hey, build good, easily parsed and managed pages and we'll make everything Way Way Faster" is the argument.

To be clear, I'm not saying that it is necessarily representative of reality, but that's the idea.


Interesting. It would appear I have been downvoted by someone for asking a question, which is a strange thing to do. Anyway, I thank you for your comment. I now feel I understand the matter / intent behind AMP a little better.


Depends on what you mean by "an open web". If you consider "an open web" to be a publicly accessible network of documents and applications, it's practically absurd to suggest that Google - a company that makes almost all of its money from services that need to crawl everything it can find on the public internet - might want to hurt this. The more websites and applications that join the open web ecosystem and open themselves to public crawling and searching, the better it is for Google.

Walled gardens like Apple News and Facebook Instant actually run counter to this open web. They can't be publicly indexed or crawled or searched, yet they have the valuable advantage of being really fast. I see AMP as an aggressive optimization designed to bring web performance up to par with native platforms like Apple News and Facebook Instant. When a publisher chooses AMP instead of exclusively using those closed platforms, it's a win for the open web when their content uses web standards (AMP is merely a strict subset of web standards) accessible by anyone, not just Apple or Facebook users.

I think nowadays "the open web ecosystem" has come to mean something different in discussions like this, closer in meaning to "the distributed web" or "a less corporate web". Google obviously doesn't care for that.


AMP is a federated web alternative to closed apps and ecosystems like Apple News and FB Instant Articles. It's a specification built using existing web standards that allows anybody to build FB IA or Apple News without integrating directly with publishers.


They're building some, but also killing others, like the aforementioned <style scoped>.


This is such a common misconception...

Internet Explorer was terrible because they would drop some binary blob (ActiveX, Flash Player) into the interpreter. And those binaries were neither standardised, nor cross-platform, nor OSS.

None of that is true for Chrome. Google's additions tend to be useful, which is why "too fast" and "bloated" is the only (generic and subjective) criticism people can come up with. With Safari on Mac and especially iOS, Google also doesn't quite own the same position that MS did. Nor do they have any incentives to harm other browsers: their competitors are Facebook and iOS native apps, i.e. the walled gardens that can't be crawled and searched.


ActiveX was never as big an issue as weird CSS rendering and JScript weirdness. Even VBScript was a minimal issue.

In some ways, it's for the best that Microsoft basically stopped development in the days of IE6, as it didn't lead to the exponential feature bloat that has been the case the last few years.


There were far more issues with IE than just ActiveX and Flash (especially since Flash was also available for all other browsers). It was the difference in how it handled HTML/CSS/JS, compared to other browsers, that was the biggest pain point by far.


WebComponents might make sense for complex Applications as opposed to web sites.

Not everything done on the web is a "web site". Or blog.

There are untold numbers of Boring Business Applications that make the world go around. (like it or not...) These applications are increasingly built to run in web browsers. (not necessarily over the public internet)

These types of applications may be large and complex. They might have hundreds of forms, thousands of reports, etc.

Web Components, and other various web technologies might be great for this type of use. This isn't a web site where you want to "save" a "page". The application isn't organized as "pages". The browser is more like a remote smart terminal with a rich set of technologies available.

Next time you check out a library book, or a nurse draws your blood for lab work, or you get your oil changed, look at the application they are using. Is it running in a web browser?


They're also great for blogs too.

Remember how everyone was adding stuff like lightboxes to their pages with jQuery Plugins? Well now that kind of thing is much cleaner and doesn't require writing a line of JS.
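i.e. once someone has published such a component, the page author's side is just markup, something along these lines (the element and file names are hypothetical):

    <!-- load the component once -->
    <script type="module" src="light-box.js"></script>

    <!-- use it like any built-in element; no page-level JS to write -->
    <light-box>
      <img src="photo.jpg" alt="A photo that opens in a lightbox">
    </light-box>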


Please, it's not "Google Web Components", just "Web Components". Web Components have pretty decent support in Safari and Firefox. But of course, the real problem is that that is just about the full list, isn't it (not counting other Chromium-based browsers), and nobody really wants to reinvent the wheel.


Interesting points here, especially about `<style scoped>` being removed from browsers after Google requested it. That would be really handy to use. https://github.com/whatwg/html/issues/552
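For anyone who never saw it, this is roughly what the removed feature looked like per the old spec: a <style scoped> applied only to its parent element's subtree.

    <div>
      <style scoped>
        p { color: crimson; }   /* applies only inside this <div> */
      </style>
      <p>Styled.</p>
    </div>
    <p>Not styled.</p>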


I (a mere dabbler in web stuff) for one liked <style scoped>, but saying it was killed because Google alone refused to implement it is disingenuous. Just look at https://caniuse.com/#feat=style-scoped And here, style scoped experiment in Chromium in 2012: https://developers.google.com/web/updates/2012/03/A-New-Expe...


But even Mozilla is having problems implementing it. Bugzilla has more than 100 open bugs about WebComponents.


>> WebComponents used "in full" (i.e. dynamically) inherently creates complex web page structures

That ship sailed a very long time ago.

WebComponents is dead simple compared to React, TypeScript, WebPack and all the complex tooling which developers are essentially forced to use these days due to a lack of alternatives.


> forced to use these days due to a lack of alternatives.

The alternatives were barebones HTML, CSS, and javascript.

They were forced, in 2013, by Google, who used their monopoly search position to enforce "responsive" design (i.e. frameworks); otherwise your page got delisted. This company is ripe for anti-trust.


(aside: I haven't seen a PHPBB forum in aggges! Amazing to know they're still going)


to be dismissed without even a bare minimum of consideration. Pale Moon is a waste of time, and it makes me sad there are people out there thinking otherwise.


There are plenty of reasons to not like web components: https://dev.to/richharris/why-i-don-t-use-web-components-2ci...


The motivations (and problems) of mega-corps rarely align with the motivations (and problems) of human persons and it's always a mistake for everyone to copy their practices blindly.

You have to decide what you want the web to be: an opaque application or a document. Using the tools of applications for documents is pretty stupid, even if it is hip and looks good on a resume. There are two webs. There's the money web run by corporate persons, and then there's the web run by human persons. If you're being paid you don't really have a choice, and that sucks. But if you do have a choice, please, please, consider what Moonchild is asking here.


In case someone else has no idea what Palemoon is:

Pale Moon is an open-source web browser with an emphasis on customizability; its motto is "Your browser, Your way". There are official releases for Microsoft Windows and Linux, an unofficial build for Mac OS, and contributed builds for various platforms. Pale Moon is a fork of Firefox with substantial divergence.

https://www.palemoon.org/


All that drama would not have happened with XHTML 2.0 spec which solved many of the problems WebComponents tries to solve (in a worse way). And don't get me started on ES4 vs Typescript. A lot of energy wasted for things that could have been solved once and for all 10 years ago, but were not because browser vendors and big web corporations favored the sloppiness of the status quo back then.


Some features of Web Components are stuff I thought should be available to the end user only, not to the web developer. This includes shadow DOM, and some of my own ideas such as "privileged CSS" and "meta-CSS" (including the ability to set finer priorities). I also thought of the <widget> element, which provides better backward compatibility (it acts like a <span> if it is not recognized), better user customization, faster loading, and potentially the ability to work even if JavaScript (and WebAssembly) is disabled or not implemented at all. This would be better than Google WebComponents, I think.

(I also don't like that they removed <blink> but did not remove <marquee>; I don't want <marquee>, but I do want <blink>, which I have recreated with userChrome.css, which also lets me set the blink rate, and that is good.)


I know this seems like its a silly question, but why are they calling it _Google_ web components?

Aren't web components now a part of the main spec via W3C acceptance?


Sites that use WebComponents are sites that I will be unable to use. Not that whether or not I can use a site matters to web devs, of course.


People are already not adopting Web Components ( ͡° ͜ʖ ͡°)


To fix the overly long lines:

    document.querySelector('.content').style['max-width'] = '650px'


Or toggle reader view in your browser


As a web developer for over 15 years, I can safely say that bad decisions will continue to be made, such as CSS-in-JS. Expect the worst. And it's not completely Google's fault (although they should never have bothered to invent WebComponents); this sort of thing comes from the hubris of CTOs and technical leads who push these various technologies, and from the acolytes who desire those positions following suit. Believe it or not, some people like to use certain tech to satisfy their own personal itches, not because the tech is actually demonstrably better.


Interesting topic about the possible death of Pale Moon because of this specification and their inability to adopt it:

https://www.reddit.com/r/palemoon/comments/fk4fnl/attention_...

Who knows; the Pale Moon team may realize pretty quickly that when the developer writes something like this - https://forum.palemoon.org/viewtopic.php?f=1&t=24004 -

it means that they can't move forward, or at least have serious issues in attempting to do so. Google Web Components... the next BIG thing in web development, used on YouTube and other big pages, will most likely be Pale Moon's undoing. Without it - no more videos on YouTube! Media and business sites no longer load and can't be used properly, buttons can't be pressed anymore, so pages are partly or fully broken! This is what more and more pages are adopting, as it is the NEXT BIG THING! The Pale Moon team has failed in the past to implement highly complex features, and the only solution so far was, in the end, to adopt fresh engines from Mozilla on a regular basis - something which is NO LONGER POSSIBLE, as future Firefox engines lack the features the Pale Moon team sees as vital! Get ready to start the search for a new browser to switch to in the near future - the best and most logical time to do so is NOW!

Additionally - regarding this whole topic: https://freenode.logbot.info/binaryoutcast/20200314

WHY it most likely will die... While we all should be grateful for Gaming4JC and his attempts to bring over some code lately - Google Web Components is TOO BIG. The SeaMonkey guys don't have it inside their new 2.53, which is based on Firefox 56. Waterfox - using the same code - also has issues. And if it is already problematic with Firefox 56-based code... how big is the chance of succeeding with Firefox 52-based code? With the help of only one skilled guy? My suggestion is to try as many other browsers as possible now, to make the switch - when it is forced on you - as painless as possible.


> Google-Web-components... The next BIG thing in web development which is used in YouTube...

> ...failed in the past to implement highly complex features...

This must be the most toxic web community I've seen in a while. I basically agree with this troll, but they still sound like some impostor failing miserably at faking confidence and competence. (Or, you know, the current leader of the free world.)


Before voting on this post, please note that the body of it is a copy/paste of the linked Reddit post and does not appear to be newsletterguy's personal opinion.


I used to be big into XHTML, but that ship has long sailed.

Also, if web development is now becoming a synonym for "ChromeOS", Chrome did not reach that state magically, but rather thanks to the crowd that kept bashing IE and FF throughout the last decade.


    > I used to be big into XHTML
I know :(

    > rather thanks to the crowd that kept bashing IE and FF throughout the last decade.
I like FF, but that's too simplified.

Chrome was preinstalled on a bunch of devices (namely Android) and heavily advertised (through Google). Firefox and even IE is having an uphill battle fighting that kind of power.

What governments need to do is do to Google what they did to Microsoft. Lawsuit, hefty fine and mandatory choose your own browser. Although, I doubt even that would help.


Sure it is simplified, but for example on my current project, the customer does not require FF or Edge compliance, yet I take the effort to also test our application on them.


Thanks for doing your part :)

Now, if only someone could stop Firefox managers from killing off the only good part of Firefox (its diverse addon system).


Ever since the XUL add-ons system was shut down, they've only added new features to webextensions. It's either already dead, or it's not being killed off, depending on your POV.


I just wish ScrapBook worked like before :(

So many extensions lost, like tears in the rain.


> Chrome was preinstalled on a bunch of devices (namely Android)

Linux comes with Firefox pre-installed (or some variant of it, depending on distro).

Windows comes with Edge pre-installed.

MacOS comes with Safari pre-installed.

iOS comes with Mobile Safari pre-installed.

Android devices appear to have a decent choice of browsers, and the default is chosen by the manufacturer. For a lot of them that's Chrome, but not all.

I don't think there's actually any large group of devices that has Chrome pre-installed.

Also, pure Android users are free to install any browser and set it as the default.

Chrome is so popular because it has been the best browser for years. Whether that's still true is open for debate. But I remember when it came out, finally an answer to the godawful shite that was IE. We may dislike it now, but it saved us from Microsoft's disaster. It's fascinating that you're now claiming that "even IE is having an uphill battle fighting that kind of power". How the tables have turned.

The problem is not so much with marketing power, but with coding investment. Writing a browser is a mammoth undertaking, with all the strange edge cases and standards. Even Microsoft has decided it doesn't see the point in writing one. We're going to be left with two browser choices not because of corporate power, but because it's just too much effort for not enough reward to write a third.

I see where TFA is coming from: if we could agree that browsers don't need to be this complex, then there would be more of them. But that would mean less flexible web pages, and we'd end up with something Flash-like being an attractive alternative.


> Chrome is so popular because it has been the best browser for years. Whether that's still true is open for debate. But I remember when it came out, finally an answer to the godawful shite that was IE. We may dislike it now, but it saved us from Microsoft's disaster.

That wasn't Chrome at all. It was Firefox. Where is this history rewrite coming from? Firefox lost market share when they went to Australis and became followers of Chrome instead of leaders (helped in part, no doubt, by funding from Google).


No, Firefox was off in the weeds and being useless at the time. I remember it well. Chrome was necessary to stop IE, utter piece of crap that it was, from dominating the world.


No. That's not it. IE was losing market share, Firefox was gaining market share slowly. Then after success of WebKit in iOS, Chrome was forked and rose exponentially.


>Linux comes with Firefox pre-installed (or some variant of it, depending on distro).

"Linux" has a minuscule market share and the people that use it are already more likely to use Firefox than Chrome since FF is open source.

>Windows comes with Edge pre-installed.

Edge is now Chromium-based as not even Microsoft wants to deal with Google breaking websites for non-Chromium browsers "by mistake" and rushing to fix it.

>MacOS comes with Safari pre-installed.

>OSX comes with Safari Mobile pre-installed.

Which are minuscule portions of the market.

>Android devices appear to have a decent choice of browsers, and the default is chosen by the manufacturer. For a lot of them that's Chrome, but not all.

Irrelevant. Nearly all Android browsers are wrappers around WebView, which is Chromium. Only Firefox comes to mind as a non-Chromium browser, and it's not preinstalled on anything.

>I don't think there's actually any large group of devices that has Chrome pre-installed.

Android phones, Microsoft Windows and every user that installed Flash or Avast! in the past 12 years.

>Also, pure Android users are free to install any browser and set it as the default.

Again, most browsers are a wrapper around Chromium. Again, most users don't change defaults.

>Chrome is so popular because it has been the best browser for years.

And anti-competitive practices like being bundled with popular software and having free advertisement from the most popular website in the world (which also happens to be the most popular email provider and video sharing platform.)

>I see where TFA is coming from: if we could agree that browsers don't need to be this complex, then there would be more of them. But that would mean less flexible web pages, and we'd end up with something Flash-like being an attractive alternative.

Websites are not flexible. They're flexible for ad companies and web designers who like pointless animations, but now they're harder to parse, modify, and save, and their compatibility with text-to-speech has gone down the drain in the past few years.

The problem with Flash is that it gave us a bunch of unresponsive, needlessly flashy websites that were harder to use. Web designers are doing the exact same thing with HTML5 and their 50 MB of JS libraries.


> Linux comes with Firefox pre-installed (or some variant of it, depending on distro).

A number of distributions ship with GNOME Web.


Chrome needs to be split from Google.


How would it pay for developers then?


The way Firefox does.


Like every other commercial browser?


What do you mean by commercial browsers?

Edge is backed by MS, Safari by Apple, so they are out. Firefox is backed by a foundation, a not-for-profit. Brave is built on top of Chromium, and open source.

I guess the old Opera browser ran ads in the browser and was a commercial browser. And did people pay for Netscape Navigator?


I only use chrome for sites that don't work in other browsers and for testing. I think it's great that IE is no longer the defacto standard, but I'm not a fan of chrome being it either.


IE and FF could've just kept up... Apple is doing decently at least.


So it's not that some devs use Chrome-only features; the other browsers are to blame for not keeping up, I see.


[flagged]


Proposition 8 was an effort to overturn gay marriage, to remove a right that gay people had possessed ever since the California Supreme Court overturned the ban 6 months prior.

If I was a gay Mozilla employee, and I knew that my CEO was not just against equality, but actively working to remove my rights, I'd be livid.


> FF was best when it was called Phoenix, from there it was downhill

WTF? Firefox 3 and above altered the browser landscape, providing a viable alternative for people using IE6.

> instead of focusing on their core product, Mozilla decided to become a sjw organisation

Yeah, I’m sure all engineers suddenly stopped work on the browser because they care about gay rights. Makes sense.


Google is trying to replace the OS with their browser. Everything else follows from this goal. They'll be exposing more and more of the underlying OS functionality on one hand, and turning Chrome into a fully fledged IDE as well. Web developers, and at some point traditional app developers, will play along, since having one platform to develop for is a very attractive idea. Web Components is just one of many steps towards this goal.

So yes I'm afraid that at some point we will all end up being servants of Google. Hopefully something will prevent this from happening.



