It means that Amazon is responsible for recalling faulty products, not that they're responsible for handling products recalled by the manufacturer.
In general, the seller is responsible for the products they sell, regardless of who manufactured them. This gives the buyer a clear party they can sue, likely a party that has a presence in their jurisdiction. Amazon can turn around and go after the manufacturer, but that's not the buyer's problem.
Amazon tried to call themselves a "marketplace" to avoid this liability, but nobody's buying that.
This works by adding noise. Can't an attacker bypass it by boosting the signal? Assuming the attacker can create sybil advertisers/browsers, this should be totally doable:
1. Define some baseline set of M impressions with various ad identifiers and from various sybil advertisers.
2. For each target user, define some set of M marker impressions, also with various ad identifiers and from various sybil advertisers.
3. Save all impressions (marker + baseline) on a bunch of sybil browsers to get above the reporting baseline with some probability.
4. If/when a target user visits a target website, request a conversion report for each ad/advertiser.
You now have a baseline signal (from the baseline ads/advertisers) and a marker signal (from the marker ads/advertisers). If this is one of your target users, you'd expect their "marker" signal to be stronger than the baseline.
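The attack above can be sketched numerically. Assume (for illustration only; the actual proposal uses a different noise mechanism and parameters) that the browser adds independent Gaussian noise to each per-ad conversion report. Averaging over many sybil impressions shrinks the noise by a factor of sqrt(n), so the marker signal stands out:

```python
import random

# Hypothetical noise model: each per-ad conversion report is the true
# count plus Gaussian noise. NOISE_SIGMA and M are illustrative values,
# not taken from any real proposal.
NOISE_SIGMA = 2.0
M = 5000  # sybil impressions per ad set


def noisy_report(true_count: float) -> float:
    return true_count + random.gauss(0.0, NOISE_SIGMA)


def averaged_signal(true_count: float, n: int = M) -> float:
    # Averaging n independent reports shrinks the noise by sqrt(n).
    return sum(noisy_report(true_count) for _ in range(n)) / n


random.seed(42)
baseline = averaged_signal(0.0)  # baseline ads: target never converted
marker = averaged_signal(1.0)    # marker ads: target did convert
print(f"baseline={baseline:.3f} marker={marker:.3f}")
```

With these numbers the per-report noise swamps any single conversion, but the averaged marker signal sits clearly above the averaged baseline, which is the whole point of the sybil amplification.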
I assume it’s the MPC part that would need the Sybil protection?
Also, another assumption on my part, but it seems that doc still builds upon the W3C proposal. Would it not be worth raising this as an issue in the repo? It seems to still be active.
It's a stop-gap because people want DMs and implementing them correctly (decentralized, e2e encrypted, etc.) is non-trivial. Rushing e2e encryption is not a good idea (and no, you can't just slap on matrix/signal and call it a day).
The alternatives are to:
1. Wait a bit longer for something half-baked that appears to meet the goals (i.e., something you're going to regret but will be unable to replace).
2. Wait even longer for something perfect.
By making the protocol centralized and stupid-simple, it's also stupid-simple to replace when everyone is done painting the perfect bikeshed.
In my experience, temporary fixes are more likely to "stick" the better they are at addressing the problem. The fact that nobody is satisfied with this fix is a good sign.
They did, which is why it seems like a relevant example to your question. They shipped centralized, and have already replaced the centralized service they shipped with a decentralized service.
> They can monetise content that didn't originate on their platform.
They have been doing it for years.
> It shifts regulators' attention from them to closed platforms like X.
It doesn't. Threads is just as closed (despite integrating an open protocol), and is still subject to the same scrutiny and provisions as the rest of Meta's products.
> They can leverage their advantages e.g. ad serving, safety to push competitors into niches.
So, let me get this straight. Facebook gained so much from adopting a decentralized protocol that they will inevitably keep moving in that direction, and yet:
- they will use it to remain the only centralized service?
- they will use it to do the same things they did before (serve ads, collect user data, etc.) but somehow be absolved of regulation and scrutiny?
Facebook Messenger is not completely decentralized, but it is E2E encrypted now after years of struggle with governments and UX. It's definitely possible to move centralized systems to be more decentralized.
It's an example of somebody replacing a centralized protocol with a more decentralized one. It's also one of the biggest direct messaging platforms in the world with E2E encryption.
That depends on your definition of decentralization. Because of the way most people set up their apps, almost all Matrix users and ~all Signal users are using a centralized app under this definition.
> That depends on your definition of decentralization.
Decentralization literally means "not centralized". If you have a single centralized entity serving all your messages through a set of centralized servers, it makes the setup what?
> Because of the way most people set up their apps, almost all Matrix users and ~all Signal users are using a centralized app under this definition.
Yes, they do, and it's centralized. What exactly makes you think otherwise?
e2e encryption is a net loss for a lot of use cases. Particularly, most DMs are spam in my experience.
Spam prevention is much harder if the server can't see the message. Spam reporting can be done with sufficient effort, but stopping known spam from reaching the user in the first place is impossible. The closest you can get is a client-side scan before actually showing the message to the user, which requires downloading the whole message just to show a "number of incoming messages" indicator, or else having the indicator lie.
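To make the tradeoff concrete, here's a minimal sketch of the client-side approach described above: since the server can only see ciphertext, the client has to fetch and decrypt every message just to compute an honest unread count. All names here are illustrative, not from any real messenger.

```python
# Hypothetical client-side spam filter for an E2EE inbox.
# The server only stores opaque blobs, so filtering known spam
# requires downloading and decrypting every message up front.
SPAM_SIGNATURES = {"buy now!!!"}  # toy "known spam" list


def unread_count(encrypted_inbox, decrypt):
    count = 0
    for blob in encrypted_inbox:
        plaintext = decrypt(blob)  # full download + decrypt required
        if plaintext.lower() not in SPAM_SIGNATURES:
            count += 1  # only non-spam counts toward the indicator
    return count


# Toy usage with identity "decryption" standing in for real crypto:
inbox = ["hello there", "BUY NOW!!!"]
print(unread_count(inbox, decrypt=lambda blob: blob))  # → 1
```

In a server-visible design the server could skip the spam blob entirely; here the client pays the full bandwidth and compute cost before it can even render the badge.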
And of course, E2EE is a lie if you're visiting a website anyway.
It is my understanding that many E2EE chat systems won't actually E2EE your initial message to someone you aren't already mutual in-app contacts with.
Either E2EE is something you "upgrade" an existing conversation into (only after both sides consent to the conversation); or E2EE is something that only inherently establishes once both sides have sent one another a message; or E2EE is something you can only enable before you start a conversation, if you already have the other person's public key (which you only get when you request to add them as a contact, and they accept).
I think schemes like this balance privacy with spam prevention quite well: privacy-conscious people can explicitly add each other before either person says anything, or send intentional small talk as pairing messages; while everyone else gets the benefit of a central spam filter sitting between them and messages from strangers.
Except they'll never replace it because they'll be too busy making some other feature stupid simple by centralizing it and we'll be back to centralized social media.
This is an issue with any software that tries to maintain backwards compatibility, not Linux. Windows has:
- Many years worth of different control panels.
- Little consistency with respect to toolkits in general.
- Fractional scaling issues in applications using older toolkits (e.g., open up the policy editor and notice the blurry fonts). Microsoft is actually giving up here and has been experimenting with ML-based scaling for old applications (an approach I expect we'll eventually see in Linux as well).
Apple handles this by breaking compatibility every so often, forcing old software out of the picture.
Windows' backward compatibility is way better though. There are plenty of GNOME extensions and applications that just don't work anymore under modern GNOME Wayland.
Example: all the redshift applications and extensions for adjusting the screen's color temperature.
It's seemed a little odd to me, particularly given Linus' attitude of "never break userspace", that so many high-profile projects decide to break backwards compatibility almost for the sake of it (particularly GTK/GNOME).
I notice that even the blurry scaling on Windows looks better than what we have on Linux. It seems that they have some special algorithm for that. Can anyone who knows how it's implemented chime in?
My FW13 AMD laptop (61Wh battery) can last 11hr+, technically. If I'm doing anything other than light web browsing, that quickly drops to 8hr. If I'm watching videos, it's more like 5hr.
Unfortunately, at least on Linux, it requires quite a bit of tuning for the moment. But there are some pretty good guides.
Suspend battery life still isn't great, but it's _much_ better (with s2idle supported) on the latest-gen AMD platform.
I previously had the 11th gen Intel and... I got much better battery life than you, but it was still pretty bad.
This is really interesting to me. I too have an 11th gen Intel machine running Arch, and while I get better battery life than 2 hours, it's still the weakest part of the system. I very rarely put it to sleep; I just turn the whole machine off. I was planning on upgrading to the AMD motherboard someday, but didn't really see a reason to do so yet. This might accelerate my plans.
Yeah, sleep on the 11th gen is basically worthless. But the battery upgrade (especially after a few years of wear and tear) and the new AMD board are worth it.