> seems to compare the Firefox version number with 65 ... I’ve no idea what the purpose of this is
I previously worked on JS infra at Google. Google uses a lot of shared JS libraries that date back over a decade, and those libraries have accumulated lots of workarounds for browsers that nobody cares about anymore. (One especially crazy-seeming one from today's perspective is that there's a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!)
It's very difficult to remove an old workaround because
1. it's hard to be completely sure nobody actually depends on the workaround (especially given the wide variety of apps and environments Google supports -- Firefox aside, Google JS runs on a lot of abandonware TVs), and
2. it's hard to prioritize doing such work, because it's low value (a few less bytes of JS) and nonzero risk (see point 1) without meaningfully moving any metrics you care about.
In all, how to eliminate accumulated cruft like this is a fascinating problem to me in that I can't see how it ever gets done. And it's not a Google thing. Even the newer cooler startup I now work at has similar "work around old Safari bug, not sure if it's safe to remove" codepaths that I can imagine will stick around forever, for similar reasons.
The longer I work in software, the more I see parallels to DNA. Yeah, sure this parasitic chunk of blueprint seems like it doesn't do anything, but it's not harming survival of the species, and hey, maybe it will serve some important purpose to the propagation of the larger supporting organism somewhere in the long long tail of probability.
I've also adopted this truism, to help steer me away from the premature refactorings I'm prone to: Clean code will either be discarded, or it will live long enough to get accidentally mutated into a mess that nobody wants to touch.
My biologist friends on Twitter were all abuzz about a "Vault Organelle"[0] the other day. Knocking out this organelle doesn't seem to do anything, but everyone seems to agree it must have some corner-case function to still be around.
The vault is not a new discovery, but since it doesn't exist in yeast or fruit flies it has somehow not gotten a lot of attention...
Hah! I had the same thought. I proposed that after Y2K we shift all the old COBOL programmers to decoding the human genome, as 30-year-old code bases are the closest thing we have.
In a few years I should make the same proposal but for all the Enterprise Java developers.
Python 3 was an attempt to remove cruft but it was drawn out and somewhat painful.
Apple on the other hand has managed to reinvent itself successfully several times by controlling its vertical tightly, and each time it does it there’s a natural shedding of cruft. I remember every single architectural move — 68k to PPC to Intel and now to ARM, executed impeccably each time. Moving a developer ecosystem with the times is truly one of the harder things but Apple seems to have managed to pull it off each time.
> Apple on the other hand has managed to reinvent itself successfully several times by controlling its vertical tightly, and each time it does it there’s a natural shedding of cruft. I remember every single architectural move — 68k to PPC to Intel and now to ARM, executed impeccably each time.
Wasn't there a recent Apple zero day due to some 56K modem code that wasn't coded correctly and had stuck around until now?
NeXT was, but Carbon and its remnants were not produced at NeXT. Carbon is long gone, but those remnants still exist in iTunes/the various apps split off from it.
And being multiplatform doesn’t suggest the absence of platform-specific code. I’ve hypothesized recently (to some chagrin) that Apple probably still maintains a skunkworks version of macOS for PPC, as insurance. It would be silly if they didn’t, given the history. So, probably yeah there’s a bunch of PPC code in macOS, but I’d bet it’s generally quite identifiable.
I don’t know why there would be a “the” contingency, but we know they’re also actively hiring RISC-V engineers. It’s definitely not obvious the field is as limited as that.
It’s not strong on embedded, mobile, or desktop, and it’s been displaced by x86 and now ARM on supercomputers, cloud, and enterprise. If it’s not dead, it’s on life support.
ARM wasn’t strong on many of the platforms it’s now running and will be soon. Apple has historically backed weak hardware platforms both to a fault and to astonishing success. Part of the way they did that was maintaining cross ISA builds internally for platforms no one would bet on.
I bet you there isn’t. It would have to be emulated, which would be too easy to spot. And there’s also no need. They’ve been using a much higher level tool chain for decades. There’s plenty of legacy code, sure, but no PPC. Rest assured.
Maybe we’re not talking about the same thing. You’re saying there’s PPC code running on current macOS. If it’s running on recent hardware, it’s running emulated, since Apple hasn’t shipped a PPC machine in more than a decade.
I’m saying the exact opposite: that there’s likely PPC source code in macOS still maintained just in case. I really doubt all of the Carbon remnants are ISA specific, the point of bringing that up was that macOS’s roots are not entirely NeXT and things that still exist are based on APIs largely from classic Mac OS.
I really doubt they are maintaining a PPC fork. It's not a trivial effort and it would be hard to justify the investment and even harder to motivate the talent needed.
ISA specific code is restricted to kernel and drivers. What's left of Carbon has been through 3 transitions (PPC to x86, x86 to x64 only, x64 to ARM). It's ISA clean all right.
In the case of web apps/sites you can easily find truly dead code by inserting instrumentation/reporting in all the old workarounds and collecting it over some reasonable period of time.
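A minimal sketch of what that instrumentation could look like (the endpoint, sampling rate, and function names here are all hypothetical): guard the suspected-dead branch with a cheap beacon, collect for a quarter or so, and delete the branch if it never fires.

```js
// Hypothetical reporting helper: sample ~1% of hits so the beacon stays cheap.
function reportLegacyHit(label) {
  if (Math.random() > 0.01) return;
  try {
    navigator.sendBeacon('/telemetry/legacy-path', JSON.stringify({
      label: label,
      ua: navigator.userAgent,
      ts: Date.now()
    }));
  } catch (e) {
    // Instrumentation must never break the page.
  }
}

// Example of a workaround kept "just in case": count how often the fallback
// branch actually runs before deciding whether it is safe to remove.
function scrollIntoViewCompat(el) {
  if (typeof el.scrollIntoView === 'function') {
    el.scrollIntoView();
  } else {
    reportLegacyHit('scrollIntoView-fallback');
    window.scrollTo(0, el.offsetTop); // old workaround, suspected dead
  }
}
```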
Nature is experimenting all the time though. Damage to DNA is random, and stable and mutable parts of the DNA have an equal chance of being impacted. Benign mutations accumulate and non-viable ones don't persist for very long. Also, accidents during cell division can cause parts of the genome to be lost.
Sometimes more extreme changes happen during cell division when genomes are duplicated. Retroviruses that manage to reach the germ line can also have a huge impact. In these cases, the new genetic material can take over entirely new functions.
Human evolution is pretty slow because of a generation time of 20 to 30 years, but in more short-lived species such as fruit flies much more interesting things can be observed. Some of these indeed remind me of software development.
> In all, how to eliminate accumulated cruft like this is a fascinating problem to me in that I can't see how it ever gets done. And it's not a Google thing. Even the newer cooler startup I now work at has similar "work around old Safari bug, not sure if it's safe to remove" codepaths that I can imagine will stick around forever, for similar reasons.
I think this is because it's not purely a technology problem. Especially if you have enterprise customers, the real question you are asking is: by removing support for these specific browsers, are you breaking someone's important workflow in a way that lacks viable IT workarounds? Because if you are, the backpressure via business channels will be what forces you to keep the old cruft in the code.
In a previous job, I had to explicitly keep tabs on certain customers' IT policies w.r.t. browsers because that would ultimately inform our browser support matrix, and because it's enterprise, the actual browser versions could lag by 5 years or more. And when a single enterprise user is stuck on IE9 but the account is worth tens of thousands of dollars to your nascent startup, starting a fight with their IT department is one of the last things you want to risk customer goodwill on.
That's why I was goddamn ecstatic about Microsoft's move to Edge, because it meant historically stuck-on-Trident businesses now had a path to supporting more up-to-date browser tech on a much faster cadence.
> One especially crazy-seeming one from today's perspective is that there's a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!
My oh my, that takes me back to the days of quirks mode and early versions of Internet Explorer. I made a good living through college helping design and ad agencies backport stuff to IE5/6/7 and became intimately familiar with the lack of event bubbling support and fan favorites like:
1. IE displaying a blank page with no errors whenever a CSS file of more than 4kb was loaded
2. Resolving CSS rendering issues between IE5/6/7 due to differing rendering strategies for the CSS box model
3. Debugging javascript in IE back when dev tools didn't exist.
For all people harp on the current state of the web, we have come a long, long way.
> and nonzero risk (see point 1) without meaningfully moving any metrics you care about.
> In all, how to eliminate accumulated cruft
Ironically, the other side which is Chrome can be quite blasé about this. See the recent alert/confirm/prompt kerfuffle https://dev.to/richharris/stay-alert-d
> One especially crazy-seeming one from today's perspective is that there's a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!
MSIE only supported bubbling, Netscape 4 only supported capturing. The DOM Level 2 model (which combined both) started getting supported around IE5 / NS6 / Mozilla, though support would remain spotty for a while especially on the IE side.
Microsoft's event model also worked off of a global (`window.event`) for the event information rather than a callback parameter which was fun.
And there was no "current target" passed to the callback (or available on `window.event`), which meant you had to keep a handle on the event target from your callback, which caused a cycle between the DOM and JavaScript, which created a memory leak by default: IE's DOM was a COM API, while JScript was its own separate non-COM runtime. By creating a cycle between the two, the JS handle would keep a nonzero COM refcount on the page's DOM, and the event target would (through the event handler) keep a foreign JScript handle, and then neither could be collected.
And because IE's COM instance was per-process the leak would live until you closed IE itself, every reload of the page would create a new document in COM, an event handler in JScript, and leak the entire thing. You had to explicitly break the cycle by hand by detaching your events during e.g. onunload.
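A rough reconstruction of the pattern being described, from memory rather than any specific codebase (it only does anything in old IE): the handler reads the global `window.event`, closes over the element because there is no current target, and has to be detached by hand on unload to break the DOM/JScript cycle.

```js
var button = document.getElementById('save');

function onClick() {
  var evt = window.event;       // IE's model: a global, not a callback parameter
  var target = evt.srcElement;  // no evt.target, no evt.currentTarget
  // Closing over `button` creates the DOM(COM) <-> JScript cycle:
  // button -> handler -> button.
  button.className = 'clicked';
}

button.attachEvent('onclick', onClick);

// Without this, the cycle keeps both sides alive and the document leaks
// until the IE process itself exits.
window.attachEvent('onunload', function () {
  button.detachEvent('onclick', onClick);
  button = null;
});
```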
In fact there was a tool called "drip" whose sole purpose was to open a page in an HTML control, then loop reloading the page and counting COM objects. If the number increased, you had a leak (almost certainly due to an undetached handler), and now… you had to find where it was.
edit: and the state of tooling at the time was… dismal is too kind a word. This was before Firebug and console APIs, so your tools were either popping `alert` for debugging (this was also before JSON support in browsers, so serializing objects for printing was neat) or installing a debugger, and those were universally shit and only handled JavaScript debugging:
Mozilla had Venkman, which was dog-slow and had a weirdly busy UI.
Microsoft had either the Script Debugger or Visual Studio's debugger. SD was extremely brittle and would crash if you looked at it funny while VS was heavyweight and expensive. SD was also completely unable to debug scripts at the toplevel (of JS files or <script> tags), it only worked inside functions. I don't think VS supported that either but I don't remember as clearly so it might have. The best part of that was that you could set breakpoints at the toplevel, they just wouldn't ever trigger.
I totally forgot all the time I used to invest back in the day in working around those IE quirks. At one point I drew up stats and figured out we spent a quarter of our frontend development budget on IE compatibility. Thanks for the memories.
It’s kind of believable. Not to sound all old, but before we had preprocessors and build tools, we memorized all the quirks and hacks. It was common to spend a lot of effort on compat, but most of it for weird edge cases that weren’t well known, or for being ambitious about moving the web forward before it was reliable to do.
This isn't really a "state of the web these days" complaint, it's what happens any time you ship cross-platform code. Look at any reasonably mature C/C++ codebase and you'll see plenty of '#ifdef __linux__ ... #ifdef __OpenBSD__'.
Having complete control over the environment your code runs in is the exception, not the rule, though the modern trend for SaaS might make you think that backend code looks cleaner than the frontend.
As a person who enjoys doing this kind of cleanups, it is disappointing. But when I think of the larger business perspective, these kinds of cleanups are difficult to justify, in that the value of them is hard to quantify.
Overall the net effect is a kind of death by a million cuts, but each of these cuts individually is a decent amount of work to clean up and fix that doesn't itself move any needles.
My latest perspective on this is that the only renewing force is also the one found in nature, which is that you occasionally need to burn the whole forest down or have the organism die, so that you can start again afresh. In a business setting this means using a new stack in a new app.
The trouble with a lot of this stuff is that it happened in the distant past when the end state wasn’t clear; hindsight says you should have labelled the hack with the conditions that required it, but that wasn’t at all obvious then: you often didn’t know what might change to make your hack unnecessary or counterproductive.
Nowadays, browser development is fairly principled so that you can express things like that—or better still, polyfill—but in the days of long ago you didn’t know who or what was going to win, and where browsers deviated it wasn’t a matter of comparing behaviour to spec and saying “this one is wrong” (which is normally how things will go these days), but rather… well, they’d probably keep on being different for a while, but maybe at some point one would cave and change to be more like the other, or maybe they’d change to something else altogether.
And because many or most situations were like this, people never got into the habit of annotating such things even in cases where it was possible (e.g. once addEventListener had won, any attachEvent usage that existed because you supported old IE should have been marked accordingly as unnecessary once IE9 was the baseline). Something like the sketch below would have made the later cleanup trivial to grep for.
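For example (the handler name here is made up), the annotation records the exact condition under which the hack becomes removable:

```js
// HACK: attachEvent fallback for IE8 and below.
// REMOVE WHEN: minimum supported browser is IE9+ (addEventListener everywhere).
function addResizeListener(onResize) { // onResize: hypothetical handler
  if (window.addEventListener) {
    window.addEventListener('resize', onResize, false);
  } else {
    window.attachEvent('onresize', onResize);
  }
}
```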
They were the days of the wild west when frontend web development was far more ad hoc than principled.
> Put a big comment on it that says "HACK: Remove when X is no longer the case"
You know, every now and then I come across some comment like that from 6 years ago that is clearly no longer applicable and I go and remove the hack... feels really good, but I don't think anyone will just go around looking for these trying to get rid of hacks as there's ALWAYS more important things to do... the hack will probably remain mostly harmless there for decades to come :D so why would we spend time on that other than by happenstance, like I've just mentioned?!
I mean it's really satisfying to do, and if the originator did make the thing easy to remove, then it doesn't take away much time from other priorities
Reminds me of a previous job where some legacy code had checks for Netscape 4. The code was written in 2003 and was still running in 2016, maybe 2017, but not any more.
That's a symptom of companies spending all their money on building new features without also spending money on regression testing.
If you build a feature to work around a third party then surely you have that third party on-hand to build automated tests against and ensure that something didn't break and/or is still needed? If not then you're not writing new features; you're writing hacks. And you are perpetuating the same problem on other people.
Not always true. Especially for something running on client devices. You can't expect an app maker to own every possible device their code is going to run on.
Now maybe Google could do something like that, but 99% of people couldn't.
There's also, in this case, a question of if anyone is still using the device/browser/whatever. It sounds like they know removing the workaround will in fact break that use case, they just don't know if they should care or not.
The third party in this case might be Grandpa Joe. You can't exactly ask him to use the beta version of X and see if it's broken. Or if he's still on Firefox vOld.ancient.
It used to be that when someone got a fancy new device, like an iPhone, you just said: if you want me to fix the issues, send me a device.
Now a lot can be emulated. I can for example start the Xcode simulator and pick the device someone has issues with. Or I can run Windows XP in VirtualBox. Or emulate a Samsung smartwatch, or run the Android emulator. So a lot of devices can be simulated or emulated, and you can try removing a line of code and re-run the test suite on all the virtual devices.
Lots of old code for obsolete devices is only one end of the problem! The other end is to make sure all those old devices still work when you make changes and add new features! There's no point keeping an old fix if the app will crash on that device anyway.
The pile of high quality regression testing Google search has is miles above any other product I have worked on; but the number of possibly generated search result pages and the number of supported browsers conspire to make the pile of possible breakages even higher.
Even so, I felt better about removing code in that JavaScript codebase than just about any other I have contributed to, and frequently did. (My total line count in JavaScript was negative for a while).
The problem was much less about the difficulty of it and more that the impact wasn't high enough to justify the amount of time you needed to spend to detect these cases and prove their safety. The low hanging fruit of stuff that already passed all the regression tests was often already scooped up, and the piles of cruft all had one edge case with some test or were hard to trigger and confirm manually that it was fixed.
The most likely case is that Firefox <65 still doesn't work (browsers almost always only fix bugs in new versions), and the question is whether or not it's still worth supporting the portion of traffic on pre-2019 Firefox.
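For context, a check like the one being discussed probably looks something like this (the parsing and the workaround call are hypothetical, reconstructed from the description):

```js
// Parse the Firefox major version out of the user agent, if present.
var match = /Firefox\/(\d+)/.exec(navigator.userAgent);
var firefoxMajor = match ? parseInt(match[1], 10) : null;

if (firefoxMajor !== null && firefoxMajor < 65) {
  // HACK: work around a Gecko bug fixed in Firefox 65 (early 2019).
  // Safe to delete only once pre-65 Firefox traffic is negligible.
  applyLegacyFirefoxWorkaround(); // hypothetical
}
```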
While I agree in theory, for a long time there was no easy way to test frontend code across multiple browsers. That effort was simply too high for anyone.
Nowadays front-end testing is doable but still far from being a pleasant experience. I did it at some jobs, in others we just didn't think the benefits were worth the hassle.
Now that I'm building a small business my backend is very well tested while the front-end doesn't have tests.
This is partly because of the setup cost and partly because the development experience is so bad that I can see our small company just throwing everything out of the window, keeping the HTML structure, and rebuilding the thin UI with some better technology along the way (current stack is React + Next + Tailwind; we've been doing React for 5+ years).
Interesting! Could you say a bit about what you think might come next? Or from a different angle, the pain points you have that make you think a rewrite in a new stack is preferable?
I don't think we're waiting for some new concept that is missing. I've been hoping for someone to maintain some popular js library implementing real functional reactive programming and arrows (like https://hackage.haskell.org/package/auto), but I can live without it.
I'm just waiting for something polished, with a small, simple, codebase (not React with fibers, yes to something like solidjs, preact) with widespread types support (not Typescript and the quest for implementing types for every dependency), ideally not creating huge bundles (I like solidjs / svelte), with a core solution to manage state (I like Elm), ideally supporting CSS encapsulation and semantic css (I like CSS modules, MaintainableCSS), mainstream enough that I can hire people to work with without having to become a teacher.
I think Elm got 90% there, but it failed hard on the community side.
I'm thinking of moving to a Rust framework (eg. seed-rs) next as soon as they get popular enough and after checking whether the wasm bundle size make sense.
- Alternative browser engines have always fallen behind, eventually.
- Open-source is a huge win for most embedded applications.
- Having a big company backing something is a huge win too, since something like Webkit ain't going away.
I think one possible outcome here might be an acquisition. Microsoft was forced to eat crow with Edge adopting Google's engine. This would be an opportunity for Apple, Microsoft, or Amazon to leapfrog Google. A GPU or multicore-accelerated browser could make the iPhone/Macbook/etc. much more responsive than Chrome.
Another might be some open-source strategy, but I don't quite know what it might be.
"Adopting Google's engine" is an interesting turn of phrase. There's still more Apple-contributed code at the core of Blink than Google code. It's not like Google did a full rewrite when they forked Webkit to create Blink.
When you think about it, at this point the Chromium project (including the Blink engine) has "contributions" in some sense or another from Microsoft, Google, Apple, and KDE. Certainly, it's a project mostly run and stewarded by Google employees in their off-hours; but there are actually a lot of interests involved! (Especially now; I expect Microsoft has likely switched their Edge browser engineering team to writing PRs for Chromium.)
I don't think your claim is entirely accurate. Google's Blink and WebKit are probably very different after all these years. And even at the very beginning of Chrome, Google had to make quite a few changes because of Chrome's process/sandboxing model, which was the main reason for the fork later (plus related adaptations for the Skia graphics engine and V8). See [1].
Besides, HTML rendering is only a portion of what makes a browser. Considering today's Chromium codebase as a whole, I would guess probably >80% of it was written by Google engineers (and certainly not in their off-hours). Still, I would consider it a successful open source project, as it now has quite diverse contributors (non-Google contributors now amount to ~30% [2]; the biggest are Microsoft, Igalia, Intel, etc.).
To clarify: I implied that Chromium is run and stewarded in Google employees' off-hours, which is different from the project being developed in Google employees' off-hours.
Of course Google pays people to work on Chromium. But do they pay people to decide what makes it into Chromium?
If so, that's a very bad look for a FOSS project. One of the central points of having a separate standalone community-driven FOSS project in the first place (rather than a corporate "source-available and we accept PRs if you assign us the IP" project), is taking steering of the project away from the corporation that created the code, and placing it instead in the hands of the community. If Google employees serve in their capacity as Google employees as directors for the Chromium Project — and so Google can tell its employee-maintainers to reject a PR to Chromium because it's not good for Chrome — then how could a browser vendor producing a competitor to Chrome based on Chromium trust the direction of Chromium to do what's best for all Chromium-based projects, rather than just Chrome?
I assume this is not the case; that Google employees not only don't direct the Chromium Project with backroom Google-internal decision-making, but are restricted legally from so doing. Though I can't find anything on the Chromium Project site to support that assumption...
I'm not sure I understand why you think Chromium is supposed to be community driven. Chromium is an open-source project that has a lot of Google and non-Google contributors (for example, Microsoft), but all of the main decision-makers are working on Chromium on behalf of their companies, and the majority of decision-makers are employed by Google.
I don't think Chromium is that kind of open source project. It doesn't seem that having the FOSS community leading the direction of dev is something they do. If you read some of the docs it seems that Googlers have special privileges and it's quite intertwined.
> If so, that's a very bad look for a FOSS project.
As much as I don't like to see Google having so much control over the dominant Web engine out there, being "community-driven" is completely orthogonal to FOSS, and Chromium is very clearly a Google project. You even have to sign a CLA before you can contribute to it: https://www.chromium.org/developers/contributing-code/extern...
As far as I can see the bigger problem isn't the code but a viable ecosystem, somewhere customers can go if Google doesn't behave. Let me explain:
As long as Firefox is alive and kicking Google cannot kill ad blockers on Chrome as they realize that will create an enormous backlash, massive PR and users flocking to Firefox.
Once Firefox is gone the network effects will be strong enough that they can eventually kill adblocking and people will be stuck with Chrome anyway.
Blame it on regulations or big media or whatever but I am certain they will do it if they get a chance.
Chromium source code might still be available but will not work on Gmail or Google Docs or Netflix, only on enthusiast websites :-/
Ad blockers are already dead on Android Chrome. I wish more people would use ad-blocking browsers, perhaps Firefox, but Fenix is somewhat slower than Chrome, and is a software train wreck of bugs.
What bugs? Ff had an upgrade not long ago where they made it difficult to use extensions, and the UI has been made worse (or I'm just old and set in my ways), but bugs? It just works. If it is slow, I never noticed.
I'm using Firefox Nightly, but many of the issues are present on release too:
- Saving images doesn't send the cookies, so you can't save images gated behind a login. (over 1 year old, not fixed, #17716)
- Switching tabs (by swiping the address bar) sometimes shows the old tab's contents/interaction, with the new tab's address bar. (unsure if reported, I should report it.)
- The menu shows a "sign in to sync" or similar prompt. When I click it, the settings screen shows me as logged in. When I close the settings screen, the menu shows me as logged in. (#19657, possibly #19036 too)
- Reopening a tab moves it to the very top of the list, before your oldest open tab. (fixed in nightly, #10986)
- Opening the tab menu scrolls you to a random place, rather than the currently open tab. (got fixed a few days ago. #20637, possibly #20960 too.)
- Reader mode randomly switches to default theme (fixed a few months ago, #17865)
The impression I get is that Firefox Mobile is shipping broken code and unwanted redesigns, and testing in production. Perhaps it's from a lack of engineering culture and management. Looking at issues like https://github.com/mozilla-mobile/fenix/pull/21038, I feel Mozilla is more interested in marketing, design, and analytics, than solid engineering. It's nauseating and depressing to see Mozilla fall so far.
It's funny, because I had the same "what bugs?" response as the parent. After reading your list, I realized that we just use the browser very differently.
I never save images from the web on my phone. I didn't even know you could switch tabs by swiping the address bar (I always open the tab menu). I'd never noticed the "sign in to sync" issue (just checked and I do see the same behavior you see), but it's just a harmless display bug IMO, that doesn't affect functionality. I've never seen the issue where opening the tab menu scrolls to a random place. I rarely use reader mode on mobile, and haven't noticed the theme issues.
The only one I've seen from your list is where reopening a closed tab moves it to the top of the list. But I do that so infrequently that it doesn't bother me.
Agree about Mozilla's engineering culture, in their mobile division at least. It's startling to think that literally the only alternative to a Blink monoculture on Android is a poorly managed one. I now see myself as a holdout of Firefox on desktop, because the constant issues and quagmire of inexplicable UI changes compelled me to move to Brave on mobile.
As an example, the GeckoView ticket[1] that #17716 depends on has been constantly pushed back from v88 to possibly v93 or later over the past year. Being able to save an image that I am able to see on a webpage is what I consider basic functionality, but there does not seem to be much of an acknowledgement from the GeckoView team about its importance.
Another example is this[2] issue report I submitted. It was closed because they "couldn't address it", and even after providing a video showing the exact problem, my report was still ignored. The issue is that it does not matter whether or not they consider it a problem, because I do, and Chromium does not have the same problem. That only makes me more likely to choose Chromium over Firefox. Compound that a dozen times over, and for me, practicality wins out over principle.
I'm also worried about the long-term stability of their mobile division - not only because of Servo, but because issues related to the mobile team being understaffed and overworked had come to light in at least one instance in the past. Mobile web browsers are becoming far too important to get wrong in the present day, and after realizing this fact, I have to wonder why Mozilla's financial structure still prevents direct contributions to their development teams for Firefox, and why they are still using what resources they do have on far less important projects like Mozilla VPN or Relay.
I believe it was fixed a while ago, but I had pull to refresh disabled as a workaround at the time (when they finally made it configurable).
The pull-to-refresh implementation is a prime example of a buggy feature that was pushed out too early and only underwent sufficient testing after it reached users. Some of the issues stemming from it still weren't fixed even a year afterwards, and they appeared in places as critical as Google's search results page.
> The pull-to-refresh implementation is a prime example of a buggy feature that was pushed out too early and only underwent sufficient testing after it reached users.
I found out that this may not be accurate. Looking at https://github.com/mozilla-mobile/fenix/issues/21175, pull-to-refresh is simply absent from release builds, and only present in nightly builds. I wish it worked properly there though.
> Ff had an upgrade not long ago where they made it difficult to use extensions
You're talking about the old extension API, which was more of a "do whatever the heck you want" than an actual API, at that point in time it was pure technical debt - Mozilla was on a clock to remove it, or Firefox could never hope to stay competitive. It was a major roadblock in enabling modern sandboxing / isolation, performance improvements, developer productivity, etc.
Can't you see they had no other alternative? They wouldn't lose the few die-hard users that absolutely "needed" their browser to do insane shit, because there was no other (maintained, modern) browser that could do all of that; and the sympathy of the remaining 99.999% of their user base was at stake. Whichever option Mozilla would go for, the 0.0001% would get the same thing - the browser you were OK with, eventually stops getting updates, becomes irrelevant, and dies.
> If it is slow, I never noticed.
Luckily we can rely on tooling, such as benchmarks and profilers, to gather actual empirical data, and use that to guide our decisions. Whatever may not be a noticeable difference on your system, could have an enormous impact on another. I remember switching from Chrome to FF right around Quantum, because it was finally usable on my hardware.
No, I'm talking about only supporting "trusted" extensions (all 17 of them, last I checked) unless you use nightly and install extensions through a predefined list.
> Luckily we can rely on tooling, such as benchmarks and profilers, to gather...
> I remember switching from Chrome to FF right around Quantum, because it was finally usable on my hardware.
What are you talking about? I'm on an ancient phone and FF is far from slow. Is Chrome faster by X percent? Who cares. Installing an ad blocker is all one needs^W^W I need to surf the net easily.
How can Safari be a defender of the free web, when iOS explicitly prohibits any other engine? They are the new IE, in that they keep users hostage, but this time you can't even run a campaign to try to get users to use a new browser.
> How can Safari be a defender of the free web, when iOS explicitly prohibits any other engine?
If you look at my post above you'll see that I mentioned that this is more or less voluntarily :)
Maybe it is not their intention at all but as long as they exist it makes it harder for Google to pull the rug out from under the free web and ad blockers in particular.
> They are the new IE, in that they keep users hostage, but this time you can't even run a campaign to try to get users to use a new browser.
No. Safari might be annoying in a number of ways, but the "new IE" title goes to Chrome.
IE became the old IE not because they lagged behind from the start but because they "innovated" new non-standard features until they had crushed competition, then stopped development until their market share was rapidly shrinking because of both Firefox and Chrome.
If you don't think Google will stop Chrome development as soon as they have crushed competition and "sadly" had to kill adblock then we have different views of Google.
(Please note: I don't think individual engineers at Google will push this agenda but I am fairly sure that "maximize shareholder profit" will over the course of a few months or a couple of years push this agenda with a weight that will easily crush even the most idealistic team. See WhatsApp for a case study of how a company and an engineering culture I loved was crushed.)
For phones/tablets, Apple doesn't want anyone using the web for apps, insisting on app store (and fees) for all transactions.
If Google came along with a deal to switch to Chrome, would modern, post-Jobs Apple go along?
Unless something has changed, they already accept cash to funnel Safari users to Google by default. And Google apps somewhat compete with Apple products.
So... why not more cash, and make Chrome Safari's guts?
Because Safari's guts weakens the web as an app platform and strengthens their App Store, a favorable outcome for Apple. Adopting Chrome as the guts of Safari would defeat that benefit, and it'd be pretty hard to regain it. Safari's current "guts" - Webkit - can veto web hardware APIs and slow down broad support for PWA features, things that would help web apps compete with the App Store.
Post-Jobs Apple is the company that was willing to replace Google Maps with their own app to gain more independent control from Google, and they were much further behind when they did that. I can't imagine that company ceding influence & control over their largest platform threat by ditching Webkit. That's my 2c of speculation.
To be clear, what I meant is that projects with open stewardship (which I believe the Chromium Project is? unclear from the project website) don't tend to want employees of companies, in their capacity as employees-of-companies to be directors/core maintainers for the project. Which is different from being regular developers on those projects.
Open-stewardship projects tend to be happy to accept contributions from companies; and they tend to be happy to accept the stewardship of individuals who happen to work at companies; but they don't want to be beholden to the interests of those companies in their direction, so projects with community/foundation stewardship don't usually allow companies to pay their employees for their time spent sitting on that foundation's board. (I.e. they let companies pay employees to write code, but they won't allow companies to pay employees to do the work of deciding whether that code belongs upstream.)
Instead, the software foundations that manage FOSS projects usually legally restrict corporations or their representatives from participating in their capacity as representatives of corporations in directorship/steering committees/etc. for the foundation. Instead, they expect/require each person with decision-making authority in the foundation to have their own individual voice—to not be just a sock-puppet of a company, saying whatever the company wants you to say. When you vote for things in the foundation, you have to be able to vote for the interests of the project itself, even if those interests are against the interests of the company you work for—without that endangering your job.
Which usually means that employees of such companies must do their foundation maintainership "off the books" of the company they work for, e.g. at non-work hours using non-work equipment. Just as if they were trying to avoid IP cross-pollution.
Source: worked at a company with an "open core" product, where the core was an Apache Software Foundation project, and most of the ASF project's maintainers happened to be employees of the company. Those people could push a PR to the ASF upstream for consideration from their work account, as a representative of the company; but they then had to don their personal-gmail-account, separate-profile, no-corporate-affiliation hat to handle the PR and discuss it with the other maintainers.
And they didn't need huge growth to make Flow possible, so I don't think there's any reason to believe they'll be less viable now than they have historically been.
In matters of GPU and multithreading, it’s not a case of “even Gecko”—in these fields, Firefox is leading the pack among desktop browsers by a very considerable margin. Firefox is the only one with actual GPU rendering (via WebRender), and I haven’t heard of any competitors even starting on doing the same, which will take them years. As for advantageous use of multithreading, Firefox has led the way here with other parts of the Quantum project (again driven by stuff that was incubated in Servo), though I have a feeling Chromium has also been steadily doing more too; I think parts of LayoutNG might be multithreaded?
Maybe someone benevolent could buy it. Probably not Mozilla, but what about a big FOSS outfit like GNU or Apache or Raspberry Pi. I think I'd even be willing to break my embargo on crowdfunding to promote such a crucial bit of the web.
Firefox is great, but I don't want to put all my chickens in that basket. WebKit is alright but has many drawbacks documented elsewhere. Any other engines are too broken on the "modern web" that society demands we use. A healthier ecosystem with less dependence on Google would be good for just about everyone.
None of those (GNU, Apache, rPi) ever bought anything, they get projects donated to them, but even then they lack any real resources to invest into such huge projects.
Flow is yet another project pulling on open source resources, and the browser market created from open browsers, and trying to privatize it. Imagine if google could just make whatever internal changes to chromium and nobody knew about it.
> "yet another project pulling on open source resources"
Erm, yeah. Isn't that the point of making something open source with a permissive license? So that other people are able to use it in their own projects?
Agreed - I feel like 15 or 20 years ago, GPL was a well-known license so that's just what people went with, without understanding all of the implications.
Legally for sure. But from an ecosystem perspective, it's reasonable to be concerned about people whose relationships to a commons is essentially extractive. When I use open source in commercial offerings, I see it as both morally appropriate and good business to contribute back in one way or another.
From my perspective as an open-source consumer, that doesn't matter much. The license tells me what I can legally get away with. But what I care about pragmatically is whether that project will keep existing and improving such that it will meet my needs down the road. And what I care about morally is maintaining positive-sum relationships with my community and society.
> yet another project pulling on open source resources
And? It's a product people find useful enough to pay for.
Open source is by default provided to the world freely. If someone is opposed to it being used freely, in any way the person on the other side chooses, including commercially, there are licenses that can restrict that.
One side effect, intentional or not, that came from Google naming their browser "Chrome" was that Gecko development became more difficult. It became too hard to get search engines (you know the one) to understand that you weren't looking for Google blog posts and press releases, but instead developer documentation for Mozilla internals, which you could have pretty reliably found previously by including "chrome" as a useful search term.
Yes, there's code in Chromium going back to the KDE/ Konqueror and WebKit days that's LGPL licensed. Mostly in the Blink rendering engine like you linked.
Just to clarify the relevant thing is whether Google owns the copyright to Chromium source, and it does for a huge part. It can license that code however it wants, including open source, or no license at all.
Chrome/Chromium also uses a ton of open source libraries for which Google does NOT hold the copyright, but all of them, per their license, can be linked with closed source code and distributed (i.e. they are not GPL/copyleft).
Words can not express the relief it is to see a completely new web browser rendering engine being developed. It's heartbreaking that it's proprietary, my initial wish was that they'd open up after it matured a bit. Still hopeful though, the browser landscape needs diversity and preferably in the form of something that isn't carrying 3 decades worth of baggage.
It is quite reasonable for it to be proprietary as developed by a small team that still needs to see return on its investment. Last thing they need is someone with a pile of cash and/or reach taking their engine and launching a new browser with it, stealing their thunder.
Monetizing a browser is not easy if you are not in the data-monetization game.
> Monetizing a browser is not easy if you are not in the data-monetization game.
How do we know that they aren't?
It's like free money for a tech company. There's little or no disincentive, and they don't have to do a lot of extra work - the data is already sitting around waiting to be collected.
As much as I'm irked by language partisanship, that's typically justified on the basis of ease of installation or attracting developers. Since this otherwise great-looking product is a closed-source binary, that would be hard to justify.
(Although I bet it's not in Rust, because then they'd surely say it anyway.)
I used to develop set-top box user interfaces that used the Ekioh browser. We'd use SVG, JavaScript and CSS to get native UI performance in a browser that was running in 256 MB of RAM. Their later versions enabled us to use HTML5 and CSS3. I enjoyed the challenges of balancing UI animation against capturing keypresses and other events as well as executing JavaScript fetched via AJAX. Due to the memory constraints we couldn't use any JavaScript frameworks and wrote everything in vanilla JS.
Our C and C++ developers would expose the native hardware functionality up into the JavaScript Ekioh engine so that we could access and control features like scanning cable TV frequencies and recording to disk.
The cable and satellite TV networks would end up paying a licence per instance of the browser running on each of their customers' set-top boxes.
Curious, what does your question imply? DuckDuckGo, Google, Safari, Instagram are all examples of closed source software and have no problems reaching users/customers.
All the examples you gave are end user applications. Closed source software meant for developers/professionals has a risk ranging from mild inconvenience, like not being able to customize something, to severe problems, like not being able to get any work done today because the license servers are down.