I've just installed it, and I'm impressed so far. I've only run it against some sample German Wikipedia articles (https://de.wikipedia.org/wiki/Clan_of_Xymox), but it produces surprisingly readable text. I also particularly like the "highlight potential errors" option to flag stuff that even the translation service thinks might be a bit off.
It's not nearly as speedy as Google Translate, but I'll take that happily if it means keeping it local.
My experience with occasional use over the past four months or so is that newspapers, business documents, and other formal documents often appear to have essentially perfect translation, at least in terms of sounding plausible without understanding the source language. For more casual stuff like Twitter posts or song lyrics it makes plenty of obvious errors, although of course some of that stuff makes little sense as written. It seems to have different issues than Google Translate, although not much if any worse overall in my limited experience.
It sounds like after Bergamot funding ended there were some communication issues and the Translate Locally group that was working with the Firefox Translate group stopped working together and now have their own extension, as mentioned in another comment:
I don't think they have a browser extension unfortunately, but it is entirely rule-based translation rather than AI models. I haven't tried this one yet either but hope to soon. I did try their web interface a few times:
It seems to work brilliantly with Russian. Less than 5 seconds to translate a medium length wikipedia article on my computer. This exceeds my hopes and expectations.
I used to use Android Chrome when overseas because auto-translating web pages is indispensable. It was a major concession for me, because I loathe Google preventing ad blocking on Android Chrome. Still, it was less than perfect - constant popups asking you about the quality of the translation. The delay in sending it to Google wasn't noticeable when you had a good connection, but on the edge of mobile coverage it made web browsing even more miserable.
Then this came along. All those nits are gone. Personally I find the translations easier to understand than Google's. (When you're overseas, easier to understand trumps grammatically perfect every time.) I'm not a fan of the banner at the top - they could move it to a toolbar icon like uBlock Origin does - but apart from that, it's damned good.
Now we need a replacement for Google Lens. For all its flaws, Lens seems near magical to me.
To be fair, Wikipedia basically appears in its entirety in the training data. It’s a good test to see whether the translation model and all the plumbing works well, but not whether the model generalises well.
Honest question: there are plenty of articles on Wikipedia where the different language versions of a page are vastly different (it feels like the majority in my experience, but that's no proof of course). How would that be useful as training data unless heavily curated?
The datasets these models are trained on are sentence pairs. So even if just a couple of sentences between two wikipedia sites are direct translations of each other, they will have appeared in the training set. They don’t have to have appeared on the same topic page, it could be that English Wikipedia has a whole category for a topic while Estonian Wikipedia has just a long single page, direct translations will still be identified and used in training.
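To make the "sentence pairs, not page pairs" point concrete, here is a minimal sketch of what such a training set looks like. All sentences and the Estonian renderings are invented for illustration; real corpora like ParaCrawl contain millions of mined pairs.

```python
# A parallel corpus is just a flat list of (source, target) sentence
# pairs. Training consumes them one at a time; it never sees which
# page, category, or site layout a sentence originally came from.
corpus = [
    ("Tallinn is the capital of Estonia.",
     "Tallinn on Eesti pealinn."),
    ("The museum opens at nine.",
     "Muuseum avatakse kell üheksa."),
]

# Each training example is an independent (source, target) pair.
for src, tgt in corpus:
    assert isinstance(src, str) and isinstance(tgt, str)
```

So two wildly different Wikipedia page structures can still contribute pairs, as long as a few individual sentences line up.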
I also think that the domain and the type of language used on Wikipedia is pretty consistent which will help a lot with unseen sentences.
By no means are these models bad! It’s just that Wikipedia is a particularly easy test for them.
How are these identified? Are they human curated? If not it seems like you need a translator to decide if they are equivalent sentence pairs to build your translator.
You're pretty much right on the money. For ParaCrawl[1] (which I worked on) we used fast machine translation systems that were "good enough" to translate one side of each pair to the language of the other, see whether they'd match sufficiently, and then deal with all the false positives through various filtering methods. Other datasets I know of use multilingual sentence embeddings, like LASER[2], to compute the distance between two sentences.
Both of these methods have a bootstrapping problem, but at this point in MT we have enough data for many languages to get started. Previous iterations of ParaCrawl used things like document structure and overlap of named entities among sentences to identify matching pairs, but this is much less robust. I don't know how they solve this problem today for low-resource languages.
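The translate-then-match idea above can be sketched in a few lines. This is a toy, not the ParaCrawl pipeline: a word-by-word dictionary stands in for the fast MT system, and Jaccard token overlap stands in for a real similarity score (embeddings like LASER in practice). All sentences and the dictionary are invented.

```python
# Toy parallel-sentence mining: "translate" the source side, then keep
# pairs whose token overlap with a target sentence clears a threshold.
TOY_DICT = {"die": "the", "katze": "cat", "sitzt": "sits", "auf": "on",
            "der": "the", "matte": "mat", "hund": "dog", "bellt": "barks"}

def toy_translate(sentence):
    # Crude word-by-word lookup; unknown words pass through unchanged.
    return " ".join(TOY_DICT.get(w, w) for w in sentence.lower().split())

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def mine_pairs(src_sents, tgt_sents, threshold=0.5):
    pairs = []
    for s in src_sents:
        translated = toy_translate(s)
        # Best-matching target sentence, kept only if it scores well;
        # real pipelines then run heavy filtering on the false positives.
        best = max(tgt_sents, key=lambda t: jaccard(translated, t))
        if jaccard(translated, best) >= threshold:
            pairs.append((s, best))
    return pairs

german = ["Die Katze sitzt auf der Matte", "Der Hund bellt"]
english = ["The cat sits on the mat", "It is raining"]
print(mine_pairs(german, english))
# → [('Die Katze sitzt auf der Matte', 'The cat sits on the mat')]
```

The second German sentence has no counterpart, so it is (correctly) filtered out, which is exactly the behavior the filtering stage exists to enforce.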
> called Project Bergamot. The ultimate goal of this consortium was to build a set of neural machine translation tools that would enable Mozilla to develop a website translation add-on that operates locally, i.e. the engines, language models and in-page translation algorithms would need to reside and be executed entirely in the user’s computer, so none of the data would be sent to the cloud, making it entirely private.
> In addition to that, two novel features needed to be introduced. The first was translation of forms, to allow users to input text in their own language that is dynamically translated on-the-fly to the page’s language. The second feature was quality estimation of the translations where low confidence translations should be automatically highlighted on the page, in order to notify the user of potential errors.
Still impressive. It works pretty well and without that cloud that Google likes to tell us we really need.
Google is a bit better of course with many common expressions, but I'm sure that could run locally too if they wanted. Mozilla just has some catching up to do because they don't monetize our data, so they have less budget to work with.
> Still impressive. It works pretty well and without that cloud that Google likes to tell us we really need.
This is still using Google's cloud to host the models and your browser has to repeatedly download them on demand. We shouldn't need to depend on Google at all, but with Firefox Translations we still do and they're still collecting data about us.
I think this comment is the prime example of Firefox being unable to do an objectively and unqualified Good Thing without a million people showering hate into the comments.
It's not just that I have high expectations of firefox, they claim to have high expectations of themselves. They heavily market themselves as being privacy friendly and often they have been, but they aren't always.
In this case, I agree that this is, largely, a "Good Thing" although not unqualified since some number of users who wouldn't have otherwise will end up repeatedly sending data to Google, probably without even being aware of it. The data they'd give up is (to me at least) small compared to the data they would have been surrendering to online translation services, but that's not really the point.
I just don't understand how they started from "Protect your privacy from sites like translate.google.com by using this add-on to translate webpages locally!" and ended up at "Let's make Firefox users connect to Google's servers every time they use this feature!" If you're creating a product designed for people concerned about their privacy, it should be beyond obvious that making your users send data to Google is a problem.
It's not like they couldn't host those files themselves at mozilla.org or (as others have pointed out) just keep them locally and avoid making a bunch of unnecessary connections to a remote host entirely. If they'd done that it would also allow Firefox Translations to work when you aren't connected to the internet.
It's really not hate though. It's love and concern. I love Firefox, and I want it to do better!
>I just don't understand how they started from "Protect your privacy from sites like translate.google.com by using this add-on to translate webpages locally!" and ended up at "Let's make Firefox users connect to Google's servers every time they use this feature!" If you're creating a product designed for people concerned about their privacy, it should be beyond obvious that making your users send data to Google is a problem.
Don't you think that, except for PII data (which shouldn't be used for training at all), those (training) datasets can be stored anywhere without it making a difference from a privacy point of view? Or am I wrongly interpreting their purpose...
Models are downloaded only once and then cached, and not repeatedly like the OP mentioned. Source: Me. I've developed it. If you disagree, are seeing a different behavior or have further questions, please reach out in the repo: github.com/mozilla/firefox-translations/
Thanks once for the response, and eleventy times for actually developing a non-cloud translation thingy. As for the caching thing I was really hoping this was the case so I guess that makes it three.
Good to know! I still hope you can find a better place to host the files, but it's nice knowing the problem only happens once per file (so long as the cache remains anyway)
Yeah, they should just use another cloud to serve the files. Using your main competitor is really disingenuous, because they can glean all kinds of usage data from it (if not more).
I'm not sure why this is done because this kind of filehosting is easily replaced by something more privacy-friendly.
We retrain models as we get new datasets and only if they improve, which is not common. So far we haven't updated any model. When it's time, then yes, they will be updated, but it's definitely not a frequent process.
> But does Google upload what you translate later?
If you use Google Translate, of course it does because everything is done on their servers
> Would be cool to have Firefox Translations integrated into TOR.
Tor Browser is just a Firefox fork, so this should not be too difficult. I believe they disable add-ons by default because add-ons can leak data and they can't check them all for this. Not sure if you can switch it back on, though. I suppose they could validate this one since it's so important. I would recommend submitting a feature request to the Tor Project.
>> But does Google upload what you translate later?
> If you use Google Translate, of course it does because everything is done on their servers
As mentioned by GGP, the Google Translate app for Android (at least) allows you to download the model for a given language (pair?), after which you no longer need any kind of Internet connection to translate. That implies everything is done locally, not on Google’s servers. GP’s question was whether the app will still save your queries and submit them once a connection becomes available just to scratch that data collection itch.
> GP’s question was whether the app will still save your queries and submit them once a connection becomes available just to scratch that data collection itch.
disclaimer: googler
This can be tested. Translate shows up in your Google 'My Activity' page, so you can do some offline translations, then switch the network back on, and see if the translations show up in My Activity. Assuming you can trust the My Activity page to be complete and accurate (my opinion is you can, but I would say that).
and FTR: I've actually just tried it and offline translations do not show up in my activity so I highly doubt they're being surreptitiously uploaded.
>As mentioned by GGP, the Google Translate app for Android (at least) allows you to download the model for a given language (pair?), after which you no longer need any kind of Internet connection to translate.
This isn't true. Google claims this, but it just doesn't work that way: I've had many, many cases of trying to translate stuff with a bad cellular data connection and it doesn't work, even though I have the language pack downloaded.
I don't think offline translation kicks in automatically when you have a bad (as opposed to no) connection. You can easily verify that it can translate without any connection (both on iOS and Android) by downloading the language and putting the phone in airplane mode. (At least, the basic text translation works fine. The more advanced features, such as speech and image translation, don't.)
Also, Microsoft's Translator app can do the same (offline translation for text), and IME it is about on par with Google's.
>Also, Microsoft's Translator app can do the same (offline translation for text), and IME it is about on par with Google's.
Interesting, I'll have to try this.
Well, I tried installing the app and using image translate mode on some Japanese and the results were not very good, not nearly as good as Google Translate. I'll try it out later with regular text.
I also looked at the phrasebook feature. That's a pretty neat idea actually. However, for some really strange reason it defaulted to showing me phrases in Spanish. I have no idea why it thinks I would want to speak Spanish (My system language is English, and I live in Japan, so obviously I want to convert to Japanese. No one speaks Spanish here.)
> using image translate mode on some Japanese and the results were not very good,
I think the honest truth is that Japanese is the ultimate challenge for any translation tool.
My Japanese friends tell me that DeepL is about as close as you will ever get to a passable translation quality.
But DeepL does not do image translation.
On a recent trip to Japan I installed six image translation apps on my phone.
None were perfect, I found Naver Papago to be the most consistently usable (although it was far from perfect).
Interesting observations I made during the extensive testing:
1) The majority of image translation apps don't like Japanese when written vertically; I found they perform best with horizontally written Japanese.
2) All image translation apps *REALLY* don't like hand-written Japanese. Some of them *MIGHT* translate *SOME* of the text. But really all of them only really work consistently with machine-printed text.
The other issue with DeepL is that it has a limited set of language pairs. I wonder what limits it; the language I'd like to use should have enough of a corpus of text.
That’s just bad programming. Turn on Airplane Mode and it will work. A bunch of apps won’t even try to use offline data when they’re “online”, even if the connection is 1 byte/second.
It’s not bad programming if the server has a bigger better model, thus gives better results, and the local model is just a lower quality but smaller portable model.
That said, let me give my HN 2c and say that Google Translate is pretty bad these days. Its community/user adjustments, for example, are guaranteed to be bad. In Spanish, you instantly know you're looking at a user "correction" because the translation has no accents: "como estas". It's bad in 100% of cases, every time I see that "user verified" symbol.
I think the offline model doesn’t have the user adjustments, but the offline model also seems to be lower quality. Back when I translated a lot, I used to know when my internet was offline mid session because of the difference in translation quality.
So I ask for a translation and it fails because it times out, giving me an error. And you call that good programming?
I get it that the server translations are better, but currently I’m not seeing any translation at all. You, Google Translate developer, should catch the error and show the offline translation instead.
Oh, I see. By “doesn’t work” I thought they (and you) just meant it still hits the server even though you have a model downloaded.
Yeah, on a spotty mobile connection, most services tend to be optimistic that it’s better to wait than to assume your internet is down. iOS online/offline callback is very optimistic, probably because for most services, trying something in a degraded 20b/s conn is better than giving up and going “sorry, no internet.” (Funnily enough, the iOS App Store gives up way too soon)
So I agree. I think the right thing to do is to do an instant translation with the local model, when available. Maybe a cherry on top is to see if the server has a better translation in the background.
I think doing machine translation in the cloud makes more sense commercially (easier to monetise) and is a lot less challenging (better hardware, model sizes less of an issue, fewer worries about exposing proprietary code or models).
But I agree that this local experience should be the future. It’s good to have the control and independence.
I wish I had your optimism. I suspect the opposite though.
A better user experience was never the goal of cloud services. Providing translation and other services in the cloud gives companies massive amounts of data about you that they can leverage to their advantage and gives them opportunities to control and shape what we're allowed to do or see. As long as that stays true there will always be a push for users to give up more control and become more dependent on third parties.
I want to live in a future where more things are done locally and independently, but things are headed in the other direction and there's a lot of money and power behind preventing the pendulum from swinging back. I'll do what I can to fight the trend though, and if this add-on really works as advertised I'll gladly use it.
I thought it was something like this at first but no. It suddenly happened around the update to 107 and happens for every site and container. Maybe a bug?
Ps: I'm on FreeBSD, for which Mozilla doesn't make a version, so I'm using a community build. It's been working totally fine for years, with the exception of hardware DRM, which is just not supported on FreeBSD (and I agree with that).
Sounds like a bug, I'd try it in a fresh profile and see it still behaves that way for you. I just installed the extension a few minutes ago and it's translating manually when I click the button.
I've been using this for a few months and I love it. I will take this opportunity to gripe though and say that it doesn't handle DOM changes, e.g. if you're going through a form and you click an expander, the contents of that expander won't be translated if it was dynamically inserted into the DOM. This is more or less any AJAX request, or React/Vue component with its own data calls, etc. As a result, I often find myself falling back to Chrome if I need translations (recently moved abroad).
Also, it would be great if the models were cached. If I don't click "translate this tab automatically as I browse" I'll download the whole model on every page load. Not great if you're tethered to your phone :(
That must be a bug. There is a lot of code in the extension specifically to make it track DOM changes and push translations for those as they happen.
React pages are especially tricky to translate as the framework does not take kindly to external JavaScript swapping out text nodes. There's code in the extension to work around that.
I use Librewolf as my main browser, instead of Firefox. I understand that Librewolf is a fork of FF, so it's not necessarily 100% API compatible with FF addons. But especially without network access, I wonder why Firefox Translate won't work in Librewolf. Maybe Librewolf disables some ML/AI features?
It's a shame; I'd like to use FF Translate instead of Google Translate. But Mozilla's telemetry and frequent style changes that break my Userchrome.css styling are a bit of a dealbreaker.
It might be because Firefox Translate hooks into some private extension APIs (for browser chrome and telemetry) that you can only use when your extension is signed by Mozilla itself. Maybe those APIs have been removed from Librewolf, or it doesn't accept Mozilla's certificate as an exception.
I maintain a forked version of the Firefox extension[^1] that doesn't use any private APIs (or telemetry). It is slightly different[^2], but uses the same libraries.
Disclosure: I work for one of the project partners and contributed to the Mozilla extension.
How doable would it be for devs to integrate local translations in their websites and webapps? Do you think we'd ever get to a point where we can include a single <script> tag and have access to this stuff?
We can already. Almost. The translator works perfectly* as just a web page[1]. It is only a matter of combining that bit with the full-page translation code[2] and some UI to toggle it.
It is a bit of a question whether this is the way to go. You're downloading about 20 MB to get all the plumbing + translation models necessary to translate a page. It would be okay if it were widespread enough that we could assume everyone already has these in their browser cache, but the trend is moving away from that model of caching[3].
* given your browser supports wasm SIMD. So an x86 processor with SSE4.1, although M1 also seems to work. And no Safari, because they haven't implemented wasm SIMD[4].
This is really encouraging to know. I was thinking more along the lines of Electron apps and other local-first PWAs rather than normal web pages, where a 20 MB increase is relatively insignificant compared to the benefits it provides (privacy, robustness, no third-party APIs, etc.).
Taking a look at the domain names, it appears that would also break add-on updates, not just browser updates, so all preferred add-ons would have to be mirrored and installed locally.
Telemetry can be disabled on: about:addons -> Firefox Translations -> Preferences and then disabling "Report high-level user interaction" and "Report errors"
I'm also using Librewolf, and I have no issues using Firefox translate. I must have changed several settings over time, including about:config flags, but apparently it definitely is possible.
Apart from that, I assume it's quite difficult for anyone to help you based on the error description "doesn't work".
Sorry, you're totally correct that "doesn't work" is a poor description of the problem. Unfortunately the failure case doesn't really give me any feedback; no matter what I do, the "Firefox Translations" button in the URL bar remains grey and doesn't produce a dialog. Clicking on the toolbar Firefox Translations button yields empty language dropdowns and a seemingly-neverending "Loading Translation Engine" message. I don't think I have anything special enabled in my about:config, maybe a few small changes, and this issue has persisted since roughly Librewolf 100.
FWIW, the "TranslateLocally for Firefox" add-on recommended by another user here works great for me.
Telemetry can be disabled on: about:addons -> Firefox Translations -> Preferences and then disabling "Report high-level user interaction" and "Report errors"
I feel like it won't matter. FF translation usage statistics will be a small fraction of Google's, and I can't imagine them even being interested in that kind of data. Since it's an add-on (vs. Chrome's auto translation by default), it probably won't be adopted by much of the browser userbase.
This is a good start, but it's worth noting that this still sends data to Google (they host the models) and this doesn't work offline. If they can fix those two things I think this is great.
Recent version also added a toolbar button with a box you can just copy text into and get a translation, which is awesome.
The first release translated endless twitter scroll inline perfectly. Unfortunately, this stopped working for me - either a Firefox update or a translations update - so I actually have to use this box often.
I'm a long-time firefox user and very vocal hater. That said, I've had less and less to hate with each update. I used to hate ff mobile(too slow). I used to hate the browser on Linux(no Wayland, no hardware video, weak multi-process). But all that has been fixed. Plus, cookie isolation, auto cookie banner management, HTTPS preferred, and mobile extensions. Not to mention the promise to support content filtering extensions.
Add to that local translation! FF is the best it has ever been, and somehow it is losing user share faster than ever. I used to get all the hate, but nowadays... I'm just not sure. I'd love to know why.
> Add to that local translation! FF is the best it has ever been, and somehow it is losing user share faster than ever. I used to get all the hate, but nowadays... I'm just not sure. I'd love to know why.
I'm also a long time fan of firefox, but it does seem like there's always something I have to disable or fix with every major update.
Firefox has always claimed to be very pro-privacy, but they don't always live up to that with their actions (From a privacy perspective perhaps the biggest slap in the face was Pocket). Even here "local translation" means that Firefox needs to repeatedly ping Google's servers in order to download the models. Still better than letting Google translate the contents of a page, but far from ideal from a privacy standpoint.
As it stands I have modifications to over 100 settings in about:config that have to be made with each fresh install of Firefox to get it locked down properly. Firefox is still the best browser out there because it gives you the ability to disable all that stuff, but it's still a pain.
User share is always going to suffer because for most people their phones come with something else already, their PCs and tablets do too, and they don't know any better. They don't see or understand the problems with using chrome so why look for a browser that gives them better solutions?
I use firefox because I care about my security and privacy. If I didn't care about that I'd use Chrome or whatever Microsoft is shipping these days too because I know sites will make sure their stuff works in those browsers. All the hate I have for firefox comes from a place of love. I care about firefox. I depend on it. I want firefox to be better. Often that means I'm calling them out on their bullshit, bitching about the extra work they're giving me after updates, or just mentioning how they could have done better.
Firefox users complain about firefox because if they didn't care enough to complain they'd just be using chrome.
Exactly this. Love it, so complain about it. (But only to those who understand.)
Plus we (?) have multiple versions of Firefox installed so we can use one with extensions, one with save-page-to-PDF, one nightly, etc. etc.
It's too long to paste as a comment here, but I started by finding every entry with a URL and deleting the URL (leaving it blank) for basically any domain that wasn't mozilla.tld (careful here though) and then I used the recommended changes from arkenfox, ghacks, the TOR browser and https://support.mozilla.org/en-US/kb/how-stop-firefox-making... and then tweak from there.
I disable things like service workers, Normandy, Shield, Pocket, the new tab page, WebRTC, searching from the address bar, prefetch, WebGL, push notifications, WebAssembly, and lots of dom.* options.
You can backup your user.js but updates tend to add/rename/remove preferences so that gets ugly fast. I've also seen firefox reset prefs I've set but didn't lock. I've been meaning to automate monitoring the file for changes and notifying me for lines that were added or modified, but I haven't gotten around to it yet.
https://getpocket.com/en/privacy/ looks like a pretty standard ad-tracking privacy policy to me. The service definitely doesn't meet the high privacy bar set by other Firefox features like end-to-end encrypted sync.
I personally think on-by-default telemetry and "studies" are the bigger slap in the face, but there's a case to be made that Pocket is worse.
> I can assure you, you have no idea what you’re talking about. Like demonstrably, not a single valid argument. Like actively promoting misinformation no argument.
Would you perhaps care to substantiate any of that, or just stick with the personal attacks?
> In addition to the information that you provide to us when you register for a user account, we collect information about the URLs, titles and content of the web pages and other information you save to Pocket. The types of information we collect includes your browser type, device type, time zone, language, and other information related to the manner in which you access the Pocket Technologies. If you are on a mobile device, we collect the advertising identifiers provided by Apple on iOS and by Google on Android. You can change this identifier in your device settings. We also collect information about your use of the Pocket Technologies so that we can provide our services. For example, as a part of providing Pocket’s syncing features, we sync information about the items that you save and view within Pocket so that your list, tags, scroll position, and other account and usage information may be synced across all of your devices.
Well, as an individual data point, I recently threw in the towel because both my regional bank's website and my health insurance page stopped working at all in Firefox. Not really something I can do without, and it seems to be getting worse for companies with smaller web presences.
I harass support in cases like this. Like, are your web developers completely incompetent?! It's not like making things work in FF takes some real big effort; just do everything properly. (We develop a lot of web products, so this isn't some baseless statement.)
Did you report those sites to https://webcompat.com ?
Also, did you try disabling Enhanced Tracking Protection for those sites? (Shield icon in URLBar)
Try in a fresh profile (via about:profiles)? (Sometimes people tweak a lot of about:config settings, which is generally NOT a good idea - there are no guarantees for those.)
>Too bad there's no Japanese or Korean translation :(
It's interesting that they have Icelandic, a language spoken by 350,000 people. I'm often surprised to see Icelandic get a full audio dub on Disney or Netflix movies, above languages spoken by 100x or more speakers. That can't be an economic decision.
Anyone know why Icelandic punches above its weight? Or is it because it has so few speakers that people do dubs to help preserve it?
I don’t know about Disney, but in (European centred) machine translation we often pick Icelandic and Maltese as first languages to do specifically because they’re small but familiar enough as tests for the pipeline.
Translates websites in your browser without using the cloud - a different approach to Google's that should protect privacy better. It doesn't handle mixed-language pages well unfortunately (so the other languages in my RSS feed are not being detected), and I can't force it to translate those pages, but otherwise it seems to do a similar job.
I see both an icon on the right end of the address bar I can click to manually translate pages, and the ability to click on the addon from the extensions list in the toolbar to open a popup to translate arbitrary text.
Love it. But all the languages are pretty similar to each other, I’d argue. I want to see it working with some exotic language like Mandarin. Most translators deliver hilarious results when translating Japanese for example.
Absolutely brilliant! I hope they eventually ship this with firefox by default. I doubt I would have heard of this extension if not for this HN post; shipping it by default would probably be useful to many people.
If this is part of what Mozilla has been working on, then hats off to them. Hopefully this is, like the iPhone, a harbinger of things to come, and not, like Watson, a last dying gasp.
I came across an add-on that leveraged the same models for translation a while back but with a very different ui/ux that I preferred https://github.com/rei2hu/berga-translator
(lets you manually upload/manage the models and perform translations in page)
A potential improvement I ran across today: a paragraph of German included in an English-language page was not translated. Didn't see a way to select it and click 'translate'.
There also doesn't seem to be a way to highlight parts that were poorly-translated and then 'report' them.
Hey Mozilla! I would really, really love a similar add-on that can summarize web pages for me.
Many webpages stretch 30 seconds' worth of information into a 10-minute article. An add-on that can summarize wall-of-text comments and entire articles would be very valuable to people like me.
Would it be possible to show the original and the translation side by side (on desktop)? I suppose you can do this manually with two windows, but an option to render them automatically next to each other could be a great language-learning tool.
I know this is FF… but could this kind of tech exist in the chromium instance of an electron app? If so, that’d sure be a nice alternative to me having to go about translating it “the hard way.” Fwiw, my electron app is React-based.
> A CPU that supports SSE4.1 extensions is required for this addon to function properly. If it doesn't, an error will be displayed when the translation is being started.
Obviously. Replacing text is a huge CPU intensive task.
It is running a full transformer-type neural network on your CPU; it needs all the speed it can get.
It’s because the translation engine requires at least SSE4.1 instructions[1]. These are translated to wasm SIMD instructions[2], which browsers only enable if the underlying hardware can execute them at least somewhat efficiently.
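The wasm-side check can be sketched as follows. This is a minimal sketch in the style of the wasm-feature-detect library, not code taken from the extension itself: it asks the engine to validate a tiny WebAssembly module containing a v128 instruction, which only succeeds when wasm SIMD is enabled (which in turn requires suitable host CPU support, e.g. SSE4.1 on x86).

```javascript
// Tiny hand-assembled wasm module whose only function uses SIMD (v128).
// Engines without wasm SIMD reject it as invalid; engines with SIMD
// enabled accept it, so WebAssembly.validate doubles as a feature probe.
const simdTestModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                   // "\0asm" magic
  0x01, 0x00, 0x00, 0x00,                   // version 1
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7b, // type section: () -> v128
  0x03, 0x02, 0x01, 0x00,                   // function section: one func of type 0
  0x0a, 0x0a, 0x01, 0x08, 0x00,             // code section: one body, no locals
  0x41, 0x00,                               //   i32.const 0
  0xfd, 0x0f,                               //   i8x16.splat (SIMD instruction)
  0xfd, 0x62,                               //   i8x16.popcnt (SIMD instruction)
  0x0b,                                     //   end
]);

function wasmSimdSupported() {
  return typeof WebAssembly === "object" &&
         WebAssembly.validate(simdTestModule);
}

console.log(wasmSimdSupported());
```

On a SIMD-capable engine this logs `true`; an extension can run a probe like this up front and show a friendly error instead of failing mid-translation.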
This is nice for translating whole sites. As a non-native English speaker, what I often want is to translate single words. I hope this add-on can provide that feature in the future as well.
I am impressed. This works with my React app without crashing it! Google Translate crashes my React app because it modifies the DOM in a way React cannot deal with.
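The failure mode can be illustrated with a toy sketch. This is not actual React or Google Translate code, just plain objects imitating DOM behavior: the translator replaces a text node that the framework still holds a reference to, so the framework's later `removeChild` on "its" node throws, because that node is no longer a child.

```javascript
// Minimal fake DOM node: removeChild throws for a non-child, which is
// how the real DOM behaves (it raises a NotFoundError).
function makeNode(name) {
  return {
    name,
    children: [],
    appendChild(child) { this.children.push(child); return child; },
    replaceChild(newChild, oldChild) {
      const i = this.children.indexOf(oldChild);
      this.children[i] = newChild;
    },
    removeChild(child) {
      const i = this.children.indexOf(child);
      if (i === -1) throw new Error("NotFoundError");
      this.children.splice(i, 1);
    },
  };
}

const parent = makeNode("p");
// The framework creates and remembers a text node...
const frameworkText = parent.appendChild(makeNode("#text: Hello"));

// ...then an in-page translator swaps it for a wrapper element:
const font = makeNode("font");
font.appendChild(makeNode("#text: Hallo"));
parent.replaceChild(font, frameworkText);

// Later the framework tries to remove the node it thinks is still there:
let crashed = false;
try {
  parent.removeChild(frameworkText);
} catch (e) {
  crashed = true; // the crash users see
}
console.log(crashed); // true
```

An extension that translates text in place without detaching the original nodes sidesteps this whole class of breakage, which would explain why the React app survives.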
I recently compared various translators, including Bing, Yandex, Baidu Fanyi, and who knows what else. My result: DeepL beat everyone with its neural MT (which I don't think can be run offline), followed by a near tie between Google Translate and Reverso (which also uses neural MT, like DeepL, and whose website is confusingly similar to DeepL's, almost like a clone).
Sometimes GT is better than DeepL, sometimes Reverso is, but most of the time DeepL delivers the best translation, followed by GT, then Reverso.
But considering two of those use neural MT, I can't imagine how good or bad Firefox's fully offline translation is by comparison.
Yeah, I guess it's still a no-go due to the botched add-on support in the regular Android Firefox build? But maybe it works in Nightly/Beta using that annoying add-on collection trick?
TL;DR: So Firefox is trying to cater to a general audience NOW, to grow its user base with features and whatnot. But a noteworthy number of those users had already left Firefox because of inconvenience and broken websites pre-Quantum (v57). And Mozilla is definitely angering its current loud and vocal "power users" with the MDN layoffs and many other decisions.
I see that the main comment thread is about why Firefox is hated/losing market share.
Let me try to share my guess/assumption. I could be wrong, so do share any counter-opinions you may have.
People really do forget that Firefox is more than just a piece of software. Firefox stands for privacy, ethics, and a lot more. Its user base and the community around it are looking for something far beyond a piece of software judged on technical merit alone. Keep this in mind as you read the rest.
When Firefox Quantum was released, I had finally moved to Chrome just a month or so earlier. Before Quantum, Firefox was horrible: slow and bulky. Firefox lost the majority of its "average" users around this point, because many websites broke, with important things like Google Meet not working.
This left Firefox with a user base of enthusiasts, privacy nerds, and power users, and a good chunk of that user base is very vocal. Firefox has been ignoring this community, and things like the MDN layoffs while the CEO keeps getting paid do not leave a good mark.
How community volunteers are ignored is also worth keeping in mind. I won't go into it here; research it, as a lot of community volunteers have written posts about it. And keep in mind that Mozilla is still spending a crazy amount of money on events while ignoring what made it good. Money it doesn't have.
And now Firefox enthusiasts are perplexed about why Firefox is not gaining any ground. Stop looking at it from technical merit alone, and you will have the answer.
I don't see Firefox as some great movement for freedom anymore, due to its current leadership. While I understand the need for a browser with a different engine, I am not at all emotionally invested in Firefox. A lot of the conversations I've had recently say the same: like me, others are ready to ditch Firefox any day.
That being said, let me clarify that Firefox is really good these days. But I do think it might be too late to win back a noticeable share of the general audience.