The article mentions that Food Lion and Giant (both part of Ahold Delhaize USA) are two supermarkets that responded to the ACLU survey saying that they do not use face recognition. I searched around and found a more complete list of companies' responses: https://www.aclu.org/blog/privacy-technology/surveillance-te... (near the bottom). The tl;dr, though, is that most companies refused to answer.
This is also a good point. There's the related problem of the extension author selling the extension to a shady company for some extra bucks, which then pushes out an auto-installing update that bundles malware/adware.
> Go programs currently compile to one WebAssembly module that includes the Go runtime for goroutine scheduling, garbage collection, maps, etc. As a result, the resulting size is at minimum around 2 MB, or 500 KB compressed.
The minimum size is a bit unfortunate, but after all it is still just experimental.
I imagine it would be possible to do some dead code elimination in the future to get the size down. Still, 500 KB is roughly the size of a lot of real-world apps[0]. I'd imagine an equivalent app written in Go would be around double, given this baseline number. Double isn't so bad, but it would be nice to get this number down.
Dead code elimination can happen in many places. In Go, for instance, an obvious one would be to eliminate most type info for types which will never be queried for introspection.
The minimum is large, but if you compare it to mainstream JS frameworks, it doesn't look that large. (Source: https://gist.github.com/Restuta/cda69e50a853aa64912d). There are several frameworks with sizes over 100 KB compressed.
I would say being 20x larger than the largest JS framework is pretty significant. There are a lot of use cases that are simply not plausible if you need to wait for 2 MB to download.
Maybe they can use some DCE to get it smaller, idk. Go binaries have never been that small so I have my doubts, but hopefully!
EDIT: I see now that the release notes say 500 KB compressed. That's much better. I didn't realize WebAssembly could be compressed - is this just gzip?
Yeah, not sure if you caught this before my edit, but I didn't see the release notes that said 500 KB at first. I didn't realize that WebAssembly could be gzipped and get that much savings.
But that's the minimum, i.e. a Hello World. A Go framework that actually has the same level of functionality would surely be much larger. And besides, JS libraries of that size are a bad thing - JS developers have spent a long time working on things like code splitting to get page load times as low as possible. It'll be a huge shame if we throw all that out with WebAssembly.
I mean, this is a limitation of Go. With JS, the "runtime" is in your browser, you've got a beautiful JS engine already. With Go, it would need that runtime bundled. Similar to shipping the JVM with your JAR.
This isn't a limitation we'd see with something like Rust.
You'd add size when you include whatever DOM abstraction you wind up going with, but at the same time you wouldn't need to add much else because the Go stdlib is pretty complete. On the other hand JS developers are adding Lodash and other utility things that Go already comes with.
Well, these days people are using tree-shaking and ES6 imports to include only the lodash functions they actually need.
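Roughly what that looks like in practice, assuming a tree-shaking bundler (Rollup, webpack in production mode, etc.) and the lodash-es build - just a sketch, the function names are only examples:

```js
// Importing all of lodash defeats tree-shaking - the whole library
// ends up in the bundle:
// import _ from 'lodash';

// With ES module named imports, a tree-shaking bundler can drop
// everything except the functions actually used.
import { debounce, groupBy } from 'lodash-es';

// Alternatively, import individual per-function modules:
// import debounce from 'lodash/debounce';

const onResize = debounce(() => console.log('resized'), 250);
window.addEventListener('resize', onResize);
```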
My point isn't that Go should be banned from WASM or something, but that "5x increase in library size for zero functionality sounds fine" is a disappointing view to see given how hard the JS community has worked on bundle sizes.
You are comparing MVC frameworks that bring a lot of functionality against what is basically a runtime that does nothing on its own. Compare functionality AND weight together; otherwise the comparison makes little sense.
Would it be possible to make use of the built-in JavaScript garbage collector (I'm assuming not) and/or split the Go runtime out into some kind of shared library, so that you at least only pay the download cost once?
(Of course that would open up all the sorts of issues that static compiles get rid of.)
I imagine that Go-to-WASM would likely be used in situations where size won't matter too much and is an acceptable trade-off for familiarity or code reuse.
Wow, I didn't know that they just shut down this month. I definitely still have the add-on installed in all my browsers, and thought it was still syncing...
You still see these "trackless trolleys" in Boston too, where some of the old streetcar and trolley lines used to be. (Unfortunately, most of those streetcars were replaced with diesel buses instead.)
I think it would be useful to implement some security against this at the registrar level (until a better fix is more broadly available). For example, if I'm registering "epic.com" (the ASCII version), the registrar could suggest that I also register "epic.com" (the Cyrillic version), or vice versa. This could at least help site owners avoid phishing attacks on their own domains.
Unfortunately, this would require all the big registrars to be on board for it to actually be effective.
In order to prevent anything, you would need to register every combination of Latin and Cyrillic characters. For a short domain like "epic", where each of the four letters can be swapped, that's 2^4 = 16 domains; for a 7-character domain it would be 2^7 = 128. In either case it would be a heavy multiplier on the base cost of the domain.
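To illustrate where the 2^n comes from, here's a quick sketch that enumerates the Latin/Cyrillic variants of a name (the confusable-character map here is only a tiny example, not a complete list):

```js
// Illustrative map of Latin letters to visually similar Cyrillic ones.
// Real confusables lists are much longer than this.
const confusables = { e: '\u0435', p: '\u0440', i: '\u0456', c: '\u0441' };

function variants(name) {
  let results = [''];
  for (const ch of name) {
    // Each swappable letter doubles the number of combinations.
    const options = confusables[ch] ? [ch, confusables[ch]] : [ch];
    results = results.flatMap(prefix => options.map(o => prefix + o));
  }
  return results;
}

console.log(variants('epic').length); // 16 = 2^4
```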
If you remind them there would be an increase in sales - especially if they point out the danger and then upsell the ASCII version - then they'd likely implement it; at least they should A/B test it. Not great - but that's how I see them doing it.
The point of standards for browsers is that you shouldn't need a polyfill just to support a feature (localStorage) in one browser. Ideally, it would just destroy its contents after your private browsing session is done, just like the way (I believe) all browsers treat cookies in private browsing mode.
Sure, what I'm saying is more that localStorage doesn't have the standards equivalent of an SLA guaranteeing it'll work for the "within the lifecycle of one page" use case. Apple, I think, is just being opinionated (and pushy) here: they think the right way to tell a web app "your localStorage won't save" is to break the API, so that trying to write to or read from localStorage raises an exception. Your app is then forced to decide whether it can go on without localStorage (by handling the exception, or by using a polyfill that does the same) or whether it can't (by just telling the user "don't use this page in Private Browsing mode, doofus!" and stalling out).
It's certainly not the approach everyone would be happy with, but it allows developers much more flexibility than the contrary case, where private browsing is a silent effect that might make apps do very stupid things (like, say, downloading the same huge asset bundle into localStorage over and over.)
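For what it's worth, the usual defensive pattern looks something like this - a minimal sketch, with an in-memory fallback that's just one possible choice:

```js
// Probe localStorage by actually writing to it: in some private-browsing
// implementations the object exists but every write throws.
function storageAvailable() {
  try {
    const key = '__storage_probe__';
    window.localStorage.setItem(key, '1');
    window.localStorage.removeItem(key);
    return true;
  } catch (err) {
    return false;
  }
}

// Fall back to a plain in-memory store when persistent storage is unusable.
const store = storageAvailable()
  ? window.localStorage
  : {
      data: new Map(),
      setItem(k, v) { this.data.set(k, String(v)); },
      getItem(k) { return this.data.has(k) ? this.data.get(k) : null; },
    };
```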
> downloading the same huge asset bundle into localStorage
This would happen anyway, no matter what storage or cache is used, since all data is cleared after the private browsing session is over.
Breaking localStorage (with an over-quota error) is not the way to deal with this. Polyfills are just more crap in complexity and downloaded bytes to compensate for a browser's issues.
It would be much better to have a navigator.isPrivate flag so apps can check the environment accurately, rather than guessing based on whether certain APIs work or not. It's not about SLAs but about supporting standards. Deleting client-side data is all that private browsing needs to do.
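To be clear, navigator.isPrivate is just the flag proposed above, not a shipping API - but if it existed, apps could branch on it directly instead of probing:

```js
// Hypothetical: navigator.isPrivate is NOT a real API, only the flag
// proposed in the comment above.
const isPrivate = navigator.isPrivate === true;

if (!isPrivate) {
  // Only bother persisting data outside of private browsing;
  // in private mode it would be wiped at the end of the session anyway.
  localStorage.setItem('asset-bundle-version', '42');
}
```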
Probably not normal people using cards, but it wouldn't be hard to train cashiers/managers on what to look for. However, this would probably just lead to shimmers made out of clear plastic.
I feel like, instead of MITM'ing all TLS connections, antivirus companies could implement this same thing in a browser extension. If good ad blockers can prevent requests for ads from being completed, an antivirus extension should be able to do something similar, without having to tamper with the TLS connection between the browser and the site.
That being said, users would probably be much safer if they skipped the antivirus and just installed a decent ad blocker.
At least with Chrome, the extension API doesn't allow you to "peek" into the content. You do have the ability to see the URL before it's fetched[1] and block the fetch/redirect, but you can't see the data until it's too late.
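Roughly what that looks like with the (Manifest V2-era) blocking webRequest API - the domain here is just a placeholder, not a real blocklist:

```js
// background.js (Manifest V2 extension with the "webRequest" and
// "webRequestBlocking" permissions). The listener sees the URL before
// the request goes out and can cancel it, but never the response body.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (details.url.includes('ads.example.com')) {  // placeholder pattern
      return { cancel: true };
    }
    return {};
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);
```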