Google Docs in a clean-room browser (ekioh.com)
414 points by sohkamyung on Sept 20, 2021 | 150 comments



> seems to compare the Firefox version number with 65 ... I’ve no idea what the purpose of this is

I previously worked on JS infra at Google. Google uses a lot of shared JS libraries that date back over a decade, and those libraries have accumulated lots of workarounds for browsers that nobody cares about anymore. (One especially crazy-seeming one from today's perspective is that there's a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!)

It's very difficult to remove an old workaround because

1. it's hard to be completely sure nobody actually depends on the workaround (especially given the wide variety of apps and environments Google supports -- Firefox aside, Google JS runs on a lot of abandonware TVs), and

2. it's hard to prioritize doing such work, because it's low value (a few fewer bytes of JS) and nonzero risk (see point 1) without meaningfully moving any metrics you care about.

In all, how to eliminate accumulated cruft like this is a fascinating problem to me, in that I can't see how it ever gets done. And it's not a Google thing. Even the newer, cooler startup I now work at has similar "work around old Safari bug, not sure if it's safe to remove" codepaths that I can imagine will stick around forever, for similar reasons.
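
For what it's worth, a hypothetical sketch of the shape these version-gated workarounds tend to take (the real Google code and whatever bug the 65 cutoff papers over aren't public, and the helpers are made up):

    // Hypothetical: the real check and the bug behind the 65 cutoff aren't public.
    function firefoxMajorVersion() {
      var m = navigator.userAgent.match(/Firefox\/(\d+)/);
      return m ? parseInt(m[1], 10) : null;
    }

    var ff = firefoxMajorVersion();
    if (ff !== null && ff < 65) {
      // Work around some long-fixed bug in old Firefox builds. Nobody
      // remembers exactly which one, so nobody dares delete this branch.
      applyLegacyWorkaround(); // hypothetical helper
    } else {
      useStandardPath(); // hypothetical helper
    }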


The longer I work in software, the more I see parallels to DNA. Yeah, sure this parasitic chunk of blueprint seems like it doesn't do anything, but it's not harming survival of the species, and hey, maybe it will serve some important purpose to the propagation of the larger supporting organism somewhere in the long long tail of probability.

I've also adopted this truism to help steer me away from the premature refactorings I'm prone to: clean code will either be discarded, or it will live long enough to get accidentally mutated into a mess that nobody wants to touch.


My biologist friends on Twitter were all abuzz about a "Vault Organelle"[0] the other day. Knocking out this organelle doesn't seem to do anything, but everyone seems to agree it must have some corner-case function to still be around.

The vault is not a new discovery, but since it doesn't exist in yeast or fruit flies it has somehow not gotten a lot of attention...

[0]: https://en.wikipedia.org/wiki/Vault_(organelle)


Hah! I had the same thought. I proposed that after Y2K we shift all the old COBOL programmers to decoding the human genome, as 30-year-old code bases are the closest thing we have.

In a few years I should make the same proposal but for all the Enterprise Java developers.


Python 3 was an attempt to remove cruft but it was drawn out and somewhat painful.

Apple on the other hand has managed to reinvent itself successfully several times by controlling its vertical tightly, and each time it does it there’s a natural shedding of cruft. I remember every single architectural move — 68k to PPC to Intel and now to ARM, executed impeccably each time. Moving a developer ecosystem with the times is truly one of the harder things but Apple seems to have managed to pull it off each time.


> Apple on the other hand has managed to reinvent itself successfully several times by controlling its vertical tightly, and each time it does it there’s a natural shedding of cruft. I remember every single architectural move — 68k to PPC to Intel and now to ARM, executed impeccably each time.

Wasn't there a recent Apple zero day due to some 56K modem code that wasn't coded correctly and had stuck around until now?

edit: https://www.wired.com/story/apple-modem-bug-since-1999/


That is not really the same. I wouldn't be surprised if Mac OS is still full of PowerPC specific code that just isn't easily identifiable as such.


It's not. NeXT was already multiplatform in the 90s.


NeXT was, but Carbon and its remnants were not produced at NeXT. Carbon is long gone, but those remnants still exist in iTunes/the various apps split off from it.

And being multiplatform doesn’t suggest the absence of platform-specific code. I’ve hypothesized recently (to some chagrin) that Apple probably still maintains a skunkworks version of macOS for PPC, as insurance. It would be silly if they didn’t, given the history. So, probably yeah there’s a bunch of PPC code in macOS, but I’d bet it’s generally quite identifiable.


I don't know why they would do that now. They're doing their own ARM thing, and the contingency would surely be more x86, even if not from Intel.


I don’t know why there would be a “the” contingency, but we know they’re also actively hiring RISC-V engineers. It’s definitely not obvious the field is as limited as that.


Yeah. Makes no sense. I loved PPC (and 68k before that) but it’s a dead platform that Apple has no interest in.


The Power ISA recently hit the front page here. It’s not a dead platform, it’s just mostly server focused.


It’s not strong on embedded, mobile or desktop, and it’s been displaced by x86 and now ARM on supercomputers, cloud and enterprise. If it’s not dead, it’s on life support.


ARM wasn’t strong on many of the platforms it’s now running on, and it will soon be on more. Apple has historically backed weak hardware platforms, both to a fault and to astonishing success. Part of the way they did that was maintaining cross-ISA builds internally for platforms no one would bet on.


ARM has been on the rise since its creation. Gaining market share and entering new segments.

PPC has been bleeding customers since Apple jumped ship. Even game consoles gave up. With RISC-V on the table, competition is even harder these days.


I bet you there isn’t. It would have to be emulated, which would be too easy to spot. And there’s also no need. They’ve been using a much higher level tool chain for decades. There’s plenty of legacy code, sure, but no PPC. Rest assured.


Why would it be emulated?


Maybe we’re not talking about the same thing. You’re saying there’s PPC code running on current macOS. If it’s running on recent hardware, it’s running emulated, since Apple hasn’t shipped a PPC machine in more than a decade.


I’m saying the exact opposite: that there’s likely PPC source code in macOS still maintained just in case. I really doubt all of the Carbon remnants are ISA specific, the point of bringing that up was that macOS’s roots are not entirely NeXT and things that still exist are based on APIs largely from classic Mac OS.


I really doubt they are maintaining a PPC fork. It's not a trivial effort and it would be hard to justify the investment and even harder to motivate the talent needed.

ISA specific code is restricted to kernel and drivers. What's left of Carbon has been through 3 transitions (PPC to x86, x86 to x64 only, x64 to ARM). It's ISA clean all right.


The Linux kernel is multiplatform, and it is full of ISA-specific code. The two are not mutually exclusive.


In the case of web apps/sites, you can easily find truly dead code by inserting instrumentation/reporting into all the old workarounds and collecting the data over some reasonable period of time.
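
A minimal sketch of that idea, assuming a hypothetical /metrics endpoint and made-up detection helpers; workarounds whose counters stay at zero over the collection window become deletion candidates:

    // Fire-and-forget hit counter for legacy workarounds (sendBeacon
    // survives page unload, so it's a common choice for this).
    function reportWorkaroundHit(workaroundId) {
      if (navigator.sendBeacon) {
        navigator.sendBeacon('/metrics', JSON.stringify({ hit: workaroundId }));
      }
    }

    if (needsOldSafariFlexboxFix()) { // hypothetical detection
      reportWorkaroundHit('old-safari-flexbox-fix');
      applyOldSafariFlexboxFix();     // hypothetical workaround
    }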


Next week on hacker news "My 5 year old TV just started sending data to Google servers!"


"Yeah, we put in this special case to handle the alternate logic during leap years, but someone refactored it out!"


It works the same with bureaucracy in governments too. Stuff is too complex, so it gets extended rather than refactored.


Nature is experimenting all the time though. Damage to DNA is random, and stable and mutable parts of the DNA have an equal chance of being impacted. Benign mutations accumulate and non-viable ones don't persist for very long. Also, accidents during cell division can cause parts of the genome to be lost.

Sometimes more extreme changes happen during cell division when genomes are duplicated. Retroviruses that manage to reach the germ line can also have a huge impact. In these cases, the new genetic material can take over entirely new functions.

Human evolution is pretty slow because of a generation time of 20 to 30 years, but in more short-lived species such as fruit flies much more interesting things can be observed. Some of these indeed remind me of software development.



The alt text on this one is impressively glib, even coming from xkcd.


> In all, how to eliminate accumulated cruft like this is a fascinating problem to me, in that I can't see how it ever gets done. And it's not a Google thing. Even the newer, cooler startup I now work at has similar "work around old Safari bug, not sure if it's safe to remove" codepaths that I can imagine will stick around forever, for similar reasons.

I think this is because it's not purely a technology problem. Especially if you have enterprise customers, the real question you are asking is: by removing support for these specific browsers, are you breaking someone's important workflow in a way that lacks viable IT workarounds? Because if you are, the backpressure via business channels will be what forces you to keep the old cruft in the code.

In a previous job, I had to explicitly keep tabs on certain customers' IT policies w.r.t. browsers because that would ultimately inform our browser support matrix, and because it's enterprise, the actual browser versions could lag by 5 years or more. And when a single enterprise user is stuck on IE9 but the account is worth tens of thousands of dollars to your nascent startup, starting a fight with their IT department is one of the last things you want to risk customer goodwill on.

That's why I was goddamn ecstatic about Microsoft's move to Edge, because it meant historically stuck-on-Trident businesses now had a path to supporting more up-to-date browser tech on a much faster cadence.


Always a challenge trying to explain to a paying customer that "they're doing it all wrong", if you want them to stay a paying customer.


Exactly—it’s a business issue. There are likely SLAs for supporting Roku or Samsung devices, as well as the enterprise fleet you mentioned.


> One especially crazy-seeming one from today's perspective is that there's a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!

My oh my, that takes me back to the days of quirks mode and early versions of Internet Explorer. I made a good living through college helping design and ad agencies backport stuff to IE5/6/7 and became intimately familiar with the lack of event bubbling support and fan favorites like:

1. IE displaying a blank page with no errors whenever a CSS file of more than 4kb was loaded

2. Resolving CSS rendering issues between IE5/6/7 due to differing rendering strategies for the CSS box model

3. Debugging JavaScript in IE back when dev tools didn't exist.

For all people harp on the current state of the web, we have come a long, long way.


> IE displaying a blank page with no errors whenever a CSS file of more than 4kb was loaded

This must predate IE5 right? I’m absolutely certain I was shipping larger single CSS files without hitting this.


I may be misremembering the file size (8kb? 32kb?), but I distinctly remember this being the bane of my existence for IE6.

EDIT: my memory was faulty—I was conflating two bugs:

1. Certain CSS expressions on html or body tags could cause pages in IE6 and 7 to crash

2. In IE6, the browser would happily parse large CSS files until ~288kb of CSS had been parsed and applied. Any remaining CSS would be silently ignored and discarded. See: http://joshua.perina.com/africa/gambia/fajara/post/internet-...


This sounds a lot more familiar! Good find.


> and nonzero risk (see point 1) without meaningfully moving any metrics you care about. In all, how to eliminate accumulated cruft

Ironically, the other side which is Chrome can be quite blasé about this. See the recent alert/confirm/prompt kerfuffle https://dev.to/richharris/stay-alert-d


> One especially crazy-seeming one from today's perspective is that there's a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!

MSIE only supported bubbling, Netscape 4 only supported capturing. The DOM Level 2 model (which combined both) started getting supported around IE5 / NS6 / Mozilla, though support would remain spotty for a while especially on the IE side.

Microsoft's event model also worked off of a global (`window.event`) for the event information rather than a callback parameter which was fun.

And there was no "current target" passed to the callback (or available on `window.event`), which meant you had to keep a handle on the event target from your callback. That caused a cycle between the DOM and JavaScript, which created a memory leak by default: IE's DOM was a COM API, while JScript was its own separate non-COM runtime. By creating a cycle between the two, the JS handle would keep a COM refcount > 1 on the page's DOM, and the event target would (through the event handler) keep a foreign JScript handle alive, and then neither could be collected.

And because IE's COM instance was per-process the leak would live until you closed IE itself, every reload of the page would create a new document in COM, an event handler in JScript, and leak the entire thing. You had to explicitly break the cycle by hand by detaching your events during e.g. onunload.
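
From memory, the leak pattern looked something like this minimal sketch (element ID and handler invented; exact collector behaviour varied by IE version):

    function setup() {
      var button = document.getElementById('save'); // DOM node: a COM object
      button.onclick = function () {                // JScript closure...
        button.className = 'clicked';               // ...captures the DOM node
      };
      // Cycle: DOM node -> onclick handler (JScript) -> DOM node. COM
      // refcounting and JScript's GC couldn't see across the boundary,
      // so the whole graph leaked until the process exited.

      // The conventional fix: break the cycle by hand on unload.
      window.onunload = function () {
        button.onclick = null;
        button = null;
      };
    }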

In fact there was a tool called "drip" whose sole purpose was to open a page in an HTML control, then loop reloading the page and counting COM objects. If the number increased, you had a leak (almost certainly due to an undetached handler), and now… you had to find where it was.

edit: and the state of tooling at the time was… dismal is too kind a word. This was before Firebug and console APIs, so your tools were either popping `alert` for debugging (this was also before JSON support in browsers, so serializing objects for printing was neat) or installing a debugger, and the debuggers were universally shit and only handled JavaScript debugging:

Mozilla had Venkman, which was dog-slow and had a weirdly busy UI.

Microsoft had either the Script Debugger or Visual Studio's debugger. SD was extremely brittle and would crash if you looked at it funny while VS was heavyweight and expensive. SD was also completely unable to debug scripts at the toplevel (of JS files or <script> tags), it only worked inside functions. I don't think VS supported that either but I don't remember as clearly so it might have. The best part of that was that you could set breakpoints at the toplevel, they just wouldn't ever trigger.


I had totally forgotten all the time I used to invest back in the day in working around those IE quirks. At one point I pulled together stats and figured out we spent a quarter of our frontend development budget on IE compatibility. Thanks for the memories.


Just a quarter?


It’s kind of believable. Not to sound all old, but before we had preprocessors and build tools, we memorized all the quirks and hacks. It was common to spend a lot of effort on compat, but most of it went to weird edge cases that weren’t well known, or to being ambitious about moving the web forward before it was reliable to do so.


Having painful flashbacks. Script Debugger was sooooo useless.


I got a shudder when you mentioned drip. Oy!


This isn't really a "state of the web these days" complaint, it's what happens any time you ship cross-platform code. Look at any reasonably mature C/C++ codebase and you'll see plenty of '#ifdef __linux__...#ifdef __OpenBSD__'.

Having complete control over the environment your code runs in is the exception, not the rule, though the modern trend for SaaS might make you think that backend code looks cleaner than the frontend.


Plus how many people even get promoted for painstakingly removing JavaScript that wasn’t needed in the first place?


As a person who enjoys doing this kind of cleanups, it is disappointing. But when I think of the larger business perspective, these kinds of cleanups are difficult to justify, in that the value of them is hard to quantify.

Overall the net effect is a kind of death by a thousand cuts, but each of these cuts individually is a decent amount of work to clean up and fix that doesn't itself move any needles.

My latest perspective on this is that the only renewing force is also the one found in nature, which is that you occasionally need to burn the whole forest down or have the organism die, so that you can start again afresh. In a business setting this means using a new stack in a new app.


1) Write it from the beginning in such a way that it can be removed easily. Ideally a single chunk of code, an entire file, etc.

2) Put a big comment on it that says "HACK: Remove when X is no longer the case"

3) Periodically check X when you think about it or come across that code

If X is "users' browsers have collectively reached a certain milestone", that should be easy to measure for via logs


The trouble with a lot of this stuff is that it happened in the distant past when the end state wasn’t clear; hindsight says you should have labelled the hack with the conditions that required it, but that wasn’t at all obvious then: you often didn’t know what might change to make your hack unnecessary or counterproductive.

Nowadays, browser development is fairly principled so that you can express things like that—or better still, polyfill—but in the days of long ago you didn’t know who or what was going to win, and where browsers deviated it wasn’t a matter of comparing behaviour to spec and saying “this one is wrong” (which is normally how things will go these days), but rather… well, they’d probably keep on being different for a while, but maybe at some point one would cave and change to be more like the other, or maybe they’d change to something else altogether.

And because many or most situations were like this, people never got into the habit of annotating such things even in cases where it was possible (e.g. once addEventListener had won, any attachEvent usage because you supported old IE should have been marked accordingly as unnecessary once IE9 was the baseline).
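
The canonical shim of that era looked roughly like this (reconstructed from memory, not from any particular library); with IE9+ as the baseline, everything except the addEventListener branch is exactly the kind of removable-but-unlabelled code described above:

    function addEvent(el, type, fn) {
      if (el.addEventListener) {
        el.addEventListener(type, fn, false);  // DOM Level 2 (the winner)
      } else if (el.attachEvent) {
        el.attachEvent('on' + type, function () {
          fn.call(el, window.event);           // old IE: global event object
        });
      } else {
        el['on' + type] = fn;                  // DOM Level 0 fallback
      }
    }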

They were the days of the wild west when frontend web development was far more ad hoc than principled.


> Put a big comment on it that says "HACK: Remove when X is no longer the case"

You know, every now and then I come across some comment like that from 6 years ago that is clearly no longer applicable and I go and remove the hack... feels really good, but I don't think anyone will just go around looking for these trying to get rid of hacks as there's ALWAYS more important things to do... the hack will probably remain mostly harmless there for decades to come :D so why would we spend time on that other than by happenstance, like I've just mentioned?!


I mean it's really satisfying to do, and if the originator did make the thing easy to remove, then it doesn't take away much time from other priorities


For letters "p" and "z", this strikes me as likely related to print and undo functions.


all the letters between p and z: pqrstuvwxyz


Like the Lotus 1-2-3 date workaround for Excel: https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...


Reminds me of a previous job where some legacy code had checks for Netscape 4. The code was written in 2003 and was still running in 2016, maybe 2017, but not any more.


> not sure if it's safe to remove

That's a symptom of companies spending all their money on building new features without also spending money on regression testing.

If you build a feature to work around a third party then surely you have that third party on-hand to build automated tests against and ensure that something didn't break and/or is still needed? If not then you're not writing new features; you're writing hacks. And you are perpetuating the same problem on other people.


Not always true. Especially for something running on client devices. You can't expect an app maker to own every possible device their code is going to run on.

Now maybe Google could do something like that, but 99% of people couldn't.

There's also, in this case, a question of if anyone is still using the device/browser/whatever. It sounds like they know removing the workaround will in fact break that use case, they just don't know if they should care or not.

The third party in this case might be Grandpa Joe. You can't exactly ask him to use the beta version of X and see if it's broken. Or whether he's still on Firefox vOld.ancient.


It used to be that when someone got a fancy new device, like an iPhone, you just said: if you want me to fix the issues, send me a device.

Now a lot can be emulated. I can, for example, start the Xcode simulator and pick the device someone has issues with. Or I can run Windows XP in VirtualBox, or emulate a Samsung smartwatch, or run the Android emulator. So a lot of devices can be simulated or emulated, and you can try removing a line of code and re-run the test suite on all the virtual devices.

Lots of old code for obsolete devices is only one end of the problem! The other end is making sure all those old devices still work when you make changes and add new features! There's no point keeping an old fix if the app will crash on that device anyway.


The pile of high quality regression testing Google search has is miles above any other product I have worked on; but the number of possibly generated search result pages and the number of supported browsers conspire to make the pile of possible breakages even higher.

Even so, I felt better about removing code in that JavaScript codebase than just about any other I have contributed to, and frequently did. (My total line count in JavaScript was negative for a while).

The problem was much less the difficulty and much more that it wasn't high enough impact to justify the amount of time you needed to spend to detect these cases and prove their safety. The low-hanging fruit of stuff that already passed all the regression tests was often already scooped up, and the piles of cruft all had one edge case with some test, or were hard to trigger and confirm manually as fixed.


The most likely case is that Firefox <65 still doesn't work (browsers almost always only fix bugs in new versions), and the question is whether or not it's still worth supporting the portion of traffic on pre-2019 Firefox.


One of the things that shocked me is that after installing a fresh Windows XP VM, I was able to use the built-in IE to use Google and it all just worked.

Whether this is something that should actually work is another question, but it's quite impressive that it does.


While I agree in theory, for a long time there was no easy way to test frontend code across multiple browsers. The effort was simply too high for anyone.

Nowadays front-end testing is doable but still far from being a pleasant experience. I did it at some jobs, in others we just didn't think the benefits were worth the hassle.

Now that I'm building a small business, my backend is very well tested while the frontend doesn't have tests. This is partly because of the setup cost, and partly because the development experience is so bad that I can see our small company just throwing everything out of the window, keeping the HTML structure, and rebuilding the thin UI with some better technology along the way (the current stack is React + Next + Tailwind; we've been doing React for 5+ years).


Interesting! Could you say a bit about what you think might come next? Or from a different angle, the pain points you have that make you think a rewrite in a new stack is preferable?


I don't think we're waiting for some new concept that is missing. I've been hoping for someone to maintain some popular js library implementing real functional reactive programming and arrows (like https://hackage.haskell.org/package/auto), but I can live without it.

I'm just waiting for something polished, with a small, simple, codebase (not React with fibers, yes to something like solidjs, preact) with widespread types support (not Typescript and the quest for implementing types for every dependency), ideally not creating huge bundles (I like solidjs / svelte), with a core solution to manage state (I like Elm), ideally supporting CSS encapsulation and semantic css (I like CSS modules, MaintainableCSS), mainstream enough that I can hire people to work with without having to become a teacher.

I think Elm got 90% there, but it failed hard on the community side. I'm thinking of moving to a Rust framework (e.g. seed-rs) next, as soon as they get popular enough and after checking whether the wasm bundle size makes sense.


Technically, color me impressed.

Business-wise, I'm skeptical:

- Alternative browser engines have always fallen behind, eventually.

- Open-source is a huge win for most embedded applications.

- Having a big company backing something is a huge win too, since something like Webkit ain't going away.

I think one possible outcome here might be an acquisition. Microsoft was forced to eat crow with Edge adopting Google's engine. This would be an opportunity for Apple, Microsoft, or Amazon to leapfrog Google. A GPU or multicore-accelerated browser could make the iPhone/Macbook/etc. much more responsive than Chrome.

Another might be some open-source strategy, but I don't quite know what it might be.


"Adopting Google's engine" is an interesting turn of phrase. There's still more Apple-contributed code at the core of Blink than Google code. It's not like Google did a full rewrite when they forked Webkit to create Blink.

When you think about it, at this point the Chromium project (including the Blink engine) has "contributions" in some sense or another from Microsoft, Google, Apple, and KDE. Certainly, it's a project mostly run and stewarded by Google employees in their off-hours; but there are actually a lot of interests involved! (Especially now; I expect Microsoft has likely switched their Edge browser engineering team to writing PRs for Chromium.)


I don't think your claim is entirely accurate. Google's Blink and WebKit are probably very different after all these years. And even at the very beginning of Chrome, Google had to make quite a few changes because of Chrome's process/sandboxing model, which was the main reason for the fork later (also related adaptations for the Skia graphics engine and V8). See [1].

Besides, HTML rendering is only a portion of what makes a browser. Considering today's Chromium codebase as a whole, I would guess probably >80% of it was written by Google engineers (and certainly not in their off hours). Still, I would consider it a successful open source project, as it now has quite diverse contributors (non-Google contributors now amount to ~30% [2]; the biggest are Microsoft, Igalia, Intel, etc.).

[1] Slides from 2013 https://events.static.linuxfound.org/sites/events/files/slid...

[2] BlinkOn 14 keynote: https://youtu.be/VrEP7SPfQVM?t=408


> and certainly not in their off hours

To clarify: I implied that Chromium is run and stewarded in Google employees' off-hours, which is different from the project being developed in Google employees' off-hours.

Of course Google pays people to work on Chromium. But do they pay people to decide what makes it into Chromium?

If so, that's a very bad look for a FOSS project. One of the central points of having a separate, standalone, community-driven FOSS project in the first place (rather than a corporate "source-available and we accept PRs if you assign us the IP" project) is taking steering of the project away from the corporation that created the code, and placing it instead in the hands of the community. If Google employees serve, in their capacity as Google employees, as directors for the Chromium Project — and so Google can tell its employee-maintainers to reject a PR to Chromium because it's not good for Chrome — then how could a browser vendor producing a competitor to Chrome based on Chromium trust the direction of Chromium to do what's best for all Chromium-based projects, rather than just Chrome?

I assume this is not the case; that Google employees not only don't direct the Chromium Project with backroom Google-internal decision-making, but are restricted legally from so doing. Though I can't find anything on the Chromium Project site to support that assumption...


I'm not sure I understand why you think Chromium is supposed to be community driven. Chromium is an open-source project that has a lot of Google and non-Google contributors (for example, Microsoft), but all of the main decision-makers are working on Chromium on behalf of their companies, and the majority of decision-makers are employed by Google.


I don't think Chromium is that kind of open source project. It doesn't seem that having the FOSS community leading the direction of dev is something they do. If you read some of the docs it seems that Googlers have special privileges and it's quite intertwined.


> If so, that's a very bad look for a FOSS project.

As much as I don't like to see Google having so much control over the dominant Web engine out there, being "community-driven" is completely orthogonal to FOSS, and Chromium is very clearly a Google project. You even have to sign a CLA before you can contribute to it: https://www.chromium.org/developers/contributing-code/extern...


The code will probably be available.

As far as I can see the bigger problem isn't the code but a viable ecosystem, somewhere customers can go if Google doesn't behave. Let me explain:

As long as Firefox is alive and kicking Google cannot kill ad blockers on Chrome as they realize that will create an enormous backlash, massive PR and users flocking to Firefox.

Once Firefox is gone the network effects will be strong enough that they can eventually kill adblocking and people will be stuck with Chrome anyway.

Blame it on regulations or big media or whatever but I am certain they will do it if they get a chance.

Chromium source code might still be available but will not work on Gmail or Google Docs or Netflix, only on enthusiast websites :-/


Ad blockers are already dead on Android Chrome. I wish more people would use ad-blocking browsers, perhaps Firefox, but Fenix is somewhat slower than Chrome, and is a software train wreck of bugs.


What bugs? Ff had an upgrade not long ago where they made it difficult to use extensions, and the UI has been made worse (or I'm just old and set in my ways), but bugs? It just works. If it is slow, I never noticed.


I'm using Firefox Nightly, but many of the issues are present on release too:

- Saving images doesn't send the cookies, so you can't save images gated behind a login. (over 1 year old, not fixed, #17716)

- Switching tabs (by swiping the address bar) sometimes shows the old tab's contents/interaction, with the new tab's address bar. (unsure if reported, I should report it.)

- The menu shows a "sign in to sync" or similar prompt. When I click it, the settings screen shows me as logged in. When I close the settings screen, the menu shows me as logged in. (#19657, possibly #19036 too)

- Reopening a tab moves it to the very top of the list, before your oldest open tab. (fixed in nightly, #10986)

- Opening the tab menu scrolls you to a random place, rather than the currently open tab. (got fixed a few days ago. #20637, possibly #20960 too.)

- Reader mode randomly switches to default theme (fixed a few months ago, #17865)

The tracker is at https://github.com/mozilla-mobile/fenix/issues.

The impression I get is that Firefox Mobile is shipping broken code and unwanted redesigns, and testing in production. Perhaps it's from a lack of engineering culture and management. Looking at issues like https://github.com/mozilla-mobile/fenix/pull/21038, I feel Mozilla is more interested in marketing, design, and analytics, than solid engineering. It's nauseating and depressing to see Mozilla fall so far.


It's funny, because I had the same "what bugs?" response as the parent. After reading your list, I realized that we just use the browser very differently.

I never save images from the web on my phone. I didn't even know you could switch tabs by swiping the address bar (I always open the tab menu). I'd never noticed the "sign in to sync" issue (just checked and I do see the same behavior you see), but it's just a harmless display bug IMO, that doesn't affect functionality. I've never seen the issue where opening the tab menu scrolls to a random place. I rarely use reader mode on mobile, and haven't noticed the theme issues.

The only one I've seen from your list is where reopening a closed tab moves it to the top of the list. But I do that so infrequently that it doesn't bother me.


Agree about Mozilla's engineering culture, in their mobile division at least. It's startling to think that literally the only alternative to a Blink monoculture on Android is a poorly-managed alternative. I now see myself as a holdout of Firefox on desktop, because the constant issues and quagmire of inexplicable UI changes compelled me to move to Brave on mobile.

As an example, the GeckoView ticket[1] that #17716 depends on has been constantly pushed back from v88 to possibly v93 or later over the past year. Being able to save an image that I am able to see on a webpage is what I consider basic functionality, but there does not seem to be much of an acknowledgement from the GeckoView team about its importance.

Another example is this[2] issue report I submitted. It was closed because they "couldn't address it", and even after providing a video showing the exact problem, my report was still ignored. The issue is that it does not matter whether or not they consider it a problem, because I do, and Chromium does not have the same problem. That only makes me more likely to choose Chromium over Firefox. Compound that a dozen times over, and for me, practicality wins out over principle.

I'm also worried about the long-term stability of their mobile division - not only because of Servo, but because issues related to the mobile team being understaffed and overworked had come to light in at least one instance in the past. Mobile web browsers are becoming far too important to get wrong in the present day, and after realizing this fact, I have to wonder why Mozilla's financial structure still prevents direct contributions to their development teams for Firefox, and why they are still using what resources they do have on far less important projects like Mozilla VPN or Relay.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1690037

[2] https://github.com/mozilla-mobile/fenix/issues/17525


I saw your previous post at https://news.ycombinator.com/item?id=28495622:

> Scrolling up inside an input box while the page is at the top of the screen causes unintentional pull-to-refresh

I saw this too, has it been reported?

> Most recently visited page is not restored when closing and reopening the app, even though it's saved to the history (closed as wontfix)

What's the issue URL? I want to read the history and maybe complain too.


> I saw this too, has it been reported?

I believe it was fixed a while ago, but I had pull to refresh disabled as a workaround at the time (when they finally made it configurable).

The pull-to-refresh implementation is a prime example of a buggy feature that was pushed out too early and underwent sufficient testing only after it reached users. Some of the issues stemming from it still weren't fixed even a year afterwards, and they appeared in places as critical as Google's search results page.

https://github.com/mozilla-mobile/fenix/issues/9766

> What's the issue URL? I want to read the history and maybe complain too.

It's the GitHub link in my previous comment.


> The pull-to-refresh implementation is a prime example of a buggy feature that was pushed out too early and underwent sufficient testing only after it reached users.

I found out that this may not be accurate. Looking at https://github.com/mozilla-mobile/fenix/issues/21175, pull-to-refresh is simply absent from release builds, and only present in nightly builds. I wish it worked properly there though.


Pull to refresh is not fixed; test at https://tasvideos.mistflux.net/.



> Ff had an upgrade not long ago where they made it difficult to use extensions

You're talking about the old extension API, which was more of a "do whatever the heck you want" than an actual API, at that point in time it was pure technical debt - Mozilla was on a clock to remove it, or Firefox could never hope to stay competitive. It was a major roadblock in enabling modern sandboxing / isolation, performance improvements, developer productivity, etc.

Can't you see they had no other alternative? They wouldn't lose the few die-hard users that absolutely "needed" their browser to do insane shit, because there was no other (maintained, modern) browser that could do all of that; and the sympathy of the remaining 99.999% of their user base was at stake. Whichever option Mozilla would go for, the 0.0001% would get the same thing - the browser you were OK with, eventually stops getting updates, becomes irrelevant, and dies.

> If it is slow, I never noticed.

Luckily we can rely on tooling, such as benchmarks and profilers, to gather actual empirical data, and use that to guide our decisions. Whatever may not be a noticeable difference on your system, could have an enormous impact on another. I remember switching from Chrome to FF right around Quantum, because it was finally usable on my hardware.


> You're talking about the old extension API,...

No, I'm talking about only supporting "trusted" extensions (all 17 of them, last I checked) unless you use nightly and install extensions through a predefined list.

> Luckily we can rely on tooling, such as benchmarks and profilers, to gather...

> I remember switching from Chrome to FF right around Quantum, because it was finally usable on my hardware.

What are you talking about? I'm on an ancient phone and FF is far from slow. Is Chrome faster by X percent? Who cares. Installing an ad blocker is all one needs^W^W I need to surf the net easily.


I think rollcat is talking about Firefox Desktop, and everyone else is talking about Firefox Fenix on Android.


You really think google would walk away from all those safari users, especially on mobile, by forcing chrome? This seems … unlikely.


Sorry, I forgot, at the moment Safari might - more or less voluntarily - be a more important defender of the free web than Mozilla.

It just so happens that Mozilla and Firefox is still on top of my mind because of how good they were back before they changed management.


How can Safari be a defender of the free web, when iOS explicitly prohibits any other engine? They are the new IE, in that they keep users hostage, but this time you can't even run a campaign to try to get users to use a new browser.


Late but I'm on the train on my way home now so:

> How can Safari be a defender of the free web, when iOS explicitly prohibits any other engine?

If you look at my post above you'll see that I mentioned that this is more or less voluntarily :)

Maybe it is not their intention at all, but as long as they exist it makes it harder for Google to pull the rug out from under the free web, and ad blockers in particular.

> They are the new IE, in that they keep users hostage, but this time you can't even run a campaign to try to get users to use a new browser.

No. Safari might be annoying in a number of ways, but the "new IE" title goes to Chrome.

IE became the old IE not because they lagged behind from the start but because they "innovated" new non-standard features until they had crushed competition, then stopped development until their market share was rapidly shrinking because of both Firefox and Chrome.

If you don't think Google will stop Chrome development as soon as they have crushed competition and "sadly" had to kill adblock then we have different views of Google.

(Please note: I don't think individual engineers at Google will push this agenda but I am fairly sure that "maximize shareholder profit" will over the course of a few months or a couple of years push this agenda with a weight that will easily crush even the most idealistic team. See WhatsApp for a case study of how a company and an engineering culture I loved was crushed.)


I wonder.

For phones/tablets, Apple doesn't want anyone using the web for apps, insisting on app store (and fees) for all transactions.

If Google came along with a deal to switch to Chrome, would modern, post-Jobs Apple go along?

Unless something has changed, they already accept cash to funnel Safari users to Google by default. And Google apps somewhat compete with Apple products.

So... why not more cash, and make Chrome Safari's guts?


Because Safari's guts weakens the web as an app platform and strengthens their App Store, a favorable outcome for Apple. Adopting Chrome as the guts of Safari would defeat that benefit, and it'd be pretty hard to regain it. Safari's current "guts" - Webkit - can veto web hardware APIs and slow down broad support for PWA features, things that would help web apps compete with the App Store.

Post-Jobs Apple is the company that was willing to replace Google Maps with their own app to gain more independent control from Google, and they were much further behind when they did that. I can't imagine that company ceding influence & control over their largest platform threat by ditching Webkit. That's my 2c of speculation.


> There's still more Apple-contributed code at the core of Blink than Google code.

Not sure about that. Even before the Blink fork, Google was the main contributor to WebKit. See the first two graphs at https://hypercritical.co/2013/04/12/code-hard-or-go-home


Also, a large part of the browser is the javascript engine, which I believe they do not share.


V8, the JavaScript engine used in Chrome, is fully open source. Many projects (including node.js) use it.


Maybe the parent comment meant that Chrome and Safari don't share the same JS engine. (As Safari uses JavaScriptCore aka. Nitro.)


> in their off-hours

Is that really the case? I always imagined they had a full-time team on it.


That's not true, there's plenty of people working full time on Blink.

Source: I work at Google on the Chrome team.


They do. Google employs a lot of engineers on fully open source work including but not limited to Chromium.


They've the largest browser team, by a decent margin.


To be clear, what I meant is that projects with open stewardship (which I believe the Chromium Project is? unclear from the project website) don't tend to want employees of companies, in their capacity as employees-of-companies to be directors/core maintainers for the project. Which is different from being regular developers on those projects.

Open-stewardship projects tend to be happy to accept contributions from companies; and they tend to be happy to accept the stewardship of individuals who happen to work at companies; but they don't want to be beholden to the interests of those companies in their direction, so projects with community/foundation stewardship don't usually allow companies to pay their employees for their time spent sitting on that foundation's board. (I.e. they let companies pay employees to write code, but they won't allow companies to pay employees to do the work of deciding whether that code belongs upstream.)

Instead, the software foundations that manage FOSS projects usually legally restrict corporations or their representatives from participating in their capacity as representatives of corporations in directorship/steering committees/etc. for the foundation. Instead, they expect/require each person with decision-making authority in the foundation to have their own individual voice—to not be just a sock-puppet of a company, saying whatever the company wants you to say. When you vote for things in the foundation, you have to be able to vote for the interests of the project itself, even if those interests are against the interests of the company you work for—without that endangering your job.

Which usually means that employees of such companies must do their foundation maintainership "off the books" of the company they work for, e.g. at non-work hours using non-work equipment. Just as if they were trying to avoid IP cross-pollution.

Source: worked at a company with an "open core" product, where the core was an Apache Software Foundation project, and most of the ASF project's maintainers happened to be employees of the company. Those people could push a PR to the ASF upstream for consideration from their work account, as a representative of the company; but they then had to don their personal-gmail-account, separate-profile, no-corporate-affiliation hat to handle the PR and discuss it with the other maintainers.


> To be clear, what I meant is that projects with open stewardship (which I believe the Chromium Project is? unclear from the project website)

Your belief is wrong.


They aren't huge, but there is a market for these embedded browsers. Ekioh has been around for 15 years.


And they haven't undergone huge growth to make Flow possible, so I don't think there's any reason to believe they'll be less viable now than they have historically been.


Doesn't WebKit already have a lot of GPU and multi-thread / multi-process acceleration?

Even Gecko does now.


> Even Gecko does now.

In matters of GPU and multithreading, it’s not a case of “even Gecko”—in these fields, Firefox is leading the pack among desktop browsers by a very considerable margin. Firefox is the only one with actual GPU rendering (via WebRender), and I haven’t heard of any competitors even starting on doing the same, which will take them years. As for advantageous use of multithreading, Firefox has led the way here with other parts of the Quantum project (again driven by stuff that was incubated in Servo), though I have a feeling Chromium has also been steadily doing more too; I think parts of LayoutNG might be multithreaded?


Blink and V8 also use GPU and multiple threads.


There are a bunch of use cases listed here: https://www.ekioh.com/solutions/


Maybe someone benevolent could buy it. Probably not Mozilla, but what about a big FOSS outfit like GNU or Apache or Raspberry Pi. I think I'd even be willing to break my embargo on crowdfunding to promote such a crucial bit of the web.

Firefox is great, but I don't want to put all my eggs in that basket. WebKit is alright but has many drawbacks documented elsewhere. Any other engines are too broken on the "modern web" that society demands we use. A healthier ecosystem with less dependence on Google would be good for just about everyone.


None of those (GNU, Apache, rPi) ever bought anything; they get projects donated to them, and even then they lack the real resources to invest in such huge projects.


And for a while Apache was where projects went to die.

Thankfully, there are more active and growing projects now.


Acquisition seems natural, because this would be a great component in a Pi-focused set-top box experience, competing with RokuOS.


*Closed source browser.

Flow is yet another project pulling on open source resources and on the browser market created by open browsers, and trying to privatize it. Imagine if google could just make whatever internal changes to chromium and nobody knew about it.


> "yet another project pulling on open source resources"

Erm, yeah. Isn't that the point of making something open source with a permissive license? So that other people are able to use it in their own projects?


I feel like a lot of people choose permissive licenses due to inertia while not really understanding them.


Agreed - I feel like 15 or 20 years ago, GPL was a well-known license so that's just what people went with, without understanding all of the implications.


Legally, for sure. But from an ecosystem perspective, it's reasonable to be concerned about people whose relationship to a commons is essentially extractive. When I use open source in commercial offerings, I see it as both morally appropriate and good business to contribute back in one way or another.


Why not use GPL/AGPL/LGPL as it's clearly more appropriate in this case?


From my perspective as an open-source consumer, that doesn't matter much. The license tells me what I can legally get away with. But what I care about pragmatically is whether that project will keep existing and improving such that it will meet my needs down the road. And what I care about morally is maintaining positive-sum relationships with my community and society.


> yet another project pulling on open source resources

And? It's a product people find useful enough to pay for.

Open source is by default provided to the world freely. If someone is opposed to it being use freely, in any way the person on the other side chooses, including commercially, there are licenses that can proscribe that.


> Imagine if google could just make whatever internal changes to chromium and nobody knew about it.

If you use chrome, they do. Chrome has a lot of changes and additions that are not in the open-source chromium codebase.


Imagine the irony of fate if they not only did that, but also had the audacity to call the resultant browser "Chrome." Makes me shudder.


One side effect, intentional or not, of Google naming their browser "Chrome" was that Gecko development became more difficult. It became too hard to get search engines (you know the one) to understand that you weren't looking for Google blog posts and press releases, but instead developer documentation for Mozilla internals, which you could previously have found pretty reliably by including "chrome" as a search term.


> Imagine if google could just make whatever internal changes to chromium and nobody knew about it.

Chromium is BSD licensed so Google absolutely can just make it closed source.


The BSD license only applies to some code from what I can tell. Other code is still under LGPL. At least this is what this file tells me:

https://chromium.googlesource.com/chromium/src/+/refs/heads/...


Yes, there's code in Chromium going back to the KDE/Konqueror and WebKit days that's LGPL licensed. Mostly in the Blink rendering engine, like you linked.


Just to clarify the relevant thing is whether Google owns the copyright to Chromium source, and it does for a huge part. It can license that code however it wants, including open source, or no license at all.

Chrome/Chromium also uses a ton of open source libraries for which Google does NOT hold the copyright, but all of them, per their license, can be linked with closed source code and distributed (i.e. they are not GPL/copyleft).


...and they do, calling it Google Chrome.


Words cannot express the relief it is to see a completely new web browser rendering engine being developed. It's heartbreaking that it's proprietary; my initial wish was that they'd open up after it matured a bit. Still hopeful though: the browser landscape needs diversity, preferably in the form of something that isn't carrying three decades' worth of baggage.


It is quite reasonable for it to be proprietary as developed by a small team that still needs to see return on its investment. Last thing they need is someone with a pile of cash and/or reach taking their engine and launching a new browser with it, stealing their thunder.

Monetizing a browser is not easy if you are not in the data-monetization game.


> Monetizing a browser is not easy if you are not in the data-monetization game.

How do we know that they aren't?

It's like free money for a tech company. There's little or no disincentive, and they don't have to do a lot of extra work - the data is already sitting around waiting to be collected.


I find it refreshing that Flow does not have its implementation language on the front page.


As much as I'm irked by language partisanship, that's typically justified on the basis of ease of installation or of attracting developers. Since this otherwise great-looking product is a closed-source binary, that would be hard to justify.

(Although I bet it's not in Rust, because then they'd surely say it anyway.)


(It's C++)


So expect to see a billion trivial security flaws that give RCE access to your computer.


I would be a lot more impressed with this browser engine if I could actually see it working.


Apparently there's a preview for Raspbian: https://support.ekioh.com/download/


It's been available for a while…

I download and try it every couple of months to see what their progress is like


> As with Gmail, I believe Flow is the only browser engine written after Google Docs that can run Google Docs

This is depressing.


I'm curious, as this is a closed source project: why does it exist? What market are they trying to serve?


I used to develop set-top box user interfaces that used the Ekioh browser. We'd use SVG, JavaScript and CSS to get native UI performance in a browser that was running in 256MB of RAM. Their later versions enabled us to use HTML5 and CSS3. I enjoyed the challenges of balancing UI animation against capturing keypresses and other events, as well as executing JavaScript fetched via AJAX. Due to the memory constraints we couldn't use any JavaScript frameworks and wrote everything in vanilla JS.

Our C and C++ developers would expose the native hardware functionality up into the JavaScript Ekioh engine so that we could access and control features like scanning cable TV frequencies and recording to disk.

The cable and satellite TV networks would end up paying a licence per instance of the browser running on each of their customers' set-top boxes.


Curious what does your question imply? DuckDuckGo, Google, Safari, Instagram are all examples of closed source software and have no problems reaching users/customers.


All the examples you gave are end-user applications. Closed source software meant for developers/professionals carries risks ranging from mild inconvenience, like not being able to customize something, to severe problems, like not being able to get any work done today because the license servers are down.


Aren't developers/professionals end users of DuckDuckGo or Safari too?


Anyone who wants to run a high performance browser on minimal hardware. Eg browser on TV set top box, embedded point of sale device, etc.


OK, I'll ask, seeing as it's never defined in the article: what is Flow and why should I care?


As I understand the sales pitch for Flow, it's a browser that can run on low power devices and remain responsive and useful.

Never used it myself, it was one of the possibilities we looked into recently for a set-top-box thing that would render most UI with a browser.


How can I test this browser to make sure my web sites work on it?


The web does not want another closed browser.


Yes but what's a 'clean-room browser'? The article doesn't explain what that term means.



Very offtopic, but the name of the author - Piers Wombwell - would make a great name for an Ob/Gyn.



