
Try setting `ytdl-format=ytdl` in `mpv.conf`, that fixed it for me.


It means that Amazon is responsible for recalling faulty products, not that they're responsible for handling products recalled by the manufacturer.

In general, the seller is responsible for the products they sell, regardless of who manufactured them. This gives the buyer a clear party they can sue, likely one with a presence in their jurisdiction. Amazon can turn around and go after the manufacturer, but that's not the buyer's problem.

Amazon tried to call themselves a "marketplace" to avoid this liability, but nobody's buying that.


This works by adding noise. Can't an attacker bypass it by boosting the signal? Assuming the attacker can create sybil advertisers/browsers, this should be totally doable:

1. Define some baseline set of M impressions with various ad identifiers and from various sybil advertisers.

2. For each target user, define some set of M marker impressions, also with various ad identifiers and from various sybil advertisers.

3. Save all impressions (marker + baseline) on a bunch of sybil browsers to get above the reporting baseline with some probability.

4. If/when a target user visits a target website, request a conversion report for each ad/advertiser.

You now have a baseline signal (from the baseline ads/advertisers) and a marker signal (from the marker ads/advertisers). If this is one of your target users, you'd expect their "marker" signal to be stronger than the baseline.
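A rough sketch of the averaging idea (a toy model, not the actual IPA API; the noise scale, counts, and function names here are all made up):

    # Toy model of the signal-boosting idea: each conversion report carries
    # DP noise, but averaging reports across many sybil browsers shrinks the
    # noise until a single extra "marker" impression stands out.
    import random

    NOISE_SCALE = 10.0   # assumed scale of the noise added to each report
    N_SYBILS = 10_000    # sybil browsers the attacker controls

    def noisy_report(true_count: float) -> float:
        # Stand-in for whatever DP mechanism the system actually uses.
        return true_count + random.gauss(0, NOISE_SCALE)

    def averaged_estimate(true_count: float) -> float:
        # Aggregate conversion reports across all sybil browsers.
        return sum(noisy_report(true_count) for _ in range(N_SYBILS)) / N_SYBILS

    baseline = averaged_estimate(5.0)      # baseline ads, stored on every sybil
    marker = averaged_estimate(5.0 + 1.0)  # marker ads, +1 if this is the target

    # With enough sybils the noise averages out and the +1 marker is visible.
    print(marker - baseline)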


Technical docs are here: https://github.com/patcg-individual-drafts/ipa/blob/main/IPA...

Might be worth opening an issue if you believe there's merit to the attack?


Those docs look out of date and appear to be designed for "app" ecosystems. The latest proposal from Mozilla is https://docs.google.com/document/d/1QMHkAQ4JiuJkNcyGjAkOikPK...

And I'm now quite sure this system is insecure. Fundamentally, one of the following must be true:

1. There is some magical sybil protection: An attacker can only spend their own privacy budget without affecting the rest of the system.

2. The system can be saturated: An attacker can spend everyone's privacy budget.

3. The system is not private: An attacker can exceed the "safe" privacy budget by combining information from multiple sybils.
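A toy sketch of why those seem to be the only options (purely illustrative; the budget numbers and class names are mine, not the proposal's):

    # Toy privacy-budget accounting, just to make the dilemma concrete.
    # Shared budget: sybil advertisers can drain it and block everyone (case 2).
    # Per-advertiser budgets: N sybil advertisers buy the attacker N * EPSILON
    # worth of queries about the same users, which they can combine (case 3).
    # Avoiding both means distinguishing real advertisers from sybils (case 1).
    EPSILON = 1.0  # made-up per-budget allowance

    class SharedBudget:
        def __init__(self):
            self.remaining = EPSILON

        def charge(self, cost: float):
            if cost > self.remaining:
                raise RuntimeError("budget exhausted for *all* advertisers")
            self.remaining -= cost

    class PerAdvertiserBudgets:
        def __init__(self):
            self.remaining = {}  # advertiser id -> remaining budget

        def charge(self, advertiser: str, cost: float):
            left = self.remaining.setdefault(advertiser, EPSILON)
            if cost > left:
                raise RuntimeError("budget exhausted for this advertiser only")
            self.remaining[advertiser] = left - cost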


I assume it’s the MPC part that would need the Sybil protection?

Also, another assumption, but that doc still appears to build on the W3C proposal - wouldn't it be worth raising this as an issue in the repo? It seems to still be active.


It's all the other parties, actually. I'm assuming Mozilla and friends are trusted and that the cryptography is perfect.

I've filed an issue at https://github.com/patcg-individual-drafts/ipa/issues/90 but I'm still not sure if that's the right repo.


I can try. But I'm pretty sure what they're trying to do is fundamentally impossible without some kind of sybil protection.


I'd recommend finding a note-taking/capture flow you're comfortable with, then using that for bookmarking: Org-mode, Obsidian, Logseq, etc.

1. You can solve the "remember random thing" problem once and for all.

2. Such tools usually have efficient capture workflows that let you capture now and organize later.

3. Note-taking applications tend to be more flexible and featureful than bookmarking applications.

The main downside is that browser integration can suffer, but you can usually find companion extensions for capturing notes at a minimum.


Hi Stebalien. My name is Esse and I just developed a tool that helps organize my bookmarks. I would appreciate it if you gave it a spin and maybe gave me a recommendation. Thank you. https://quickbookmarks.site/

My email is ovief72@gmail.com. Thanks again.


If you're looking for a more actively developed collaborative editing mode, take a look at https://github.com/casouri/collab-mode.


It's a stop-gap because people want DMs and implementing them correctly (decentralized, e2e encrypted, etc.) is non-trivial. Rushing e2e encryption is not a good idea (and no, you can't just slap on matrix/signal and call it a day).

The alternatives are to:

1. Wait a bit longer for something half-baked that appears to meet the goals (i.e., something you're going to regret but will be unable to replace).

2. Wait even longer for something perfect.

By making the protocol centralized and stupid-simple, it's also stupid-simple to replace when everyone is done painting the perfect bikeshed.


> By making the protocol centralized and stupid-simple, it's also stupid-simple to replace when everyone is done painting the perfect bikeshed.

But we all know that the more temporary the fix, the more permanent it becomes.


In my experience, temporary fixes are more likely to "stick" the better they are at addressing the problem. The fact that nobody is satisfied with this fix is a good sign.


There is nothing more permanent than a temporary solution.


> By making the protocol centralized and stupid-simple, it's also stupid-simple to replace when everyone is done painting the perfect bikeshed.

Can you recall any example of anyone replacing a centralized protocol with a decentralized one?


Didn’t Bluesky ship centralized, and then later replace the centralized protocol with the decentralized AT Protocol?


Did they? Heh, I didn't know that. But I thought they launched with the AT protocol already, no?


They did, which is why it seems like a relevant example to your question. They shipped centralized, and have already replaced the centralized service they shipped with a decentralized service.


Threads sits on top of the Instagram infrastructure.

And they have added ActivityPub integration, moving everything closer to decentralisation.

Given how much of a win-win for Meta it is, it wouldn't surprise me to see all their networks move in that direction.


> Given how much of a win-win for Meta

How much?

> to see all their networks move in that direction.

Why would they? What exactly will the move entail?


a) They can monetise content that didn't originate on their platform.

b) It shifts regulators' attention from them to closed platforms like X.

c) They can leverage their advantages (e.g., ad serving, safety) to push competitors into niches.


> They can monetise content that didn't originate on their platform.

They have been doing it for years.

> It shifts regulators' attention from them to closed platforms like X.

It doesn't. Threads is just as closed (despite integrating an open protocol), and is still subject to the same scrutiny and provisions as the rest of Meta's products.

> They can leverage their advantages (e.g., ad serving, safety) to push competitors into niches.

So, let me get this straight. Facebook supposedly gains so much from adopting a decentralized protocol that they will inevitably move in that direction, because:

- they will use it to remain the only centralized service?

- they will use it to do the same things they did before (serve ads, collect user data, etc.) but somehow be absolved of regulation and scrutiny?


Facebook Messenger is not completely decentralized, but it is E2E encrypted now, after years of struggle with governments and UX. It's definitely possible to move centralized systems to be more decentralized.


How is that an answer to the question?


It's an example of somebody replacing a centralized protocol with a more decentralized one. It's also one of the biggest direct messaging platforms in the world with E2E encryption.


How is it decentralized? It's running from and through Facebook servers.


Facebook cannot read your messages, so it is more decentralized than a system that stores messages in plaintext (or stores the decryption keys).


That's not what decentralized means, though. This whole comment thread is unclear on whether decentralization or encryption is what's desired.


That is because people want decentralized E2EE multi-device chats without manual key management, which AFAIK is not really possible.


Seems like it's simply a more private option.

It being encrypted but routed through a single company's servers means it's just as centralized as if it were unencrypted, though.


That depends on your definition of decentralization. Because of the way most people set up their apps, almost all Matrix users and ~all Signal users are using a centralized app under this definition.


> That depends on your definition of decentralization.

Decentralization literally means "not centralized". If you have a single centralized entity serving all your messages through a set of centralized servers, it makes the setup what?

> Because of the way most people set up their apps, almost all Matrix users and ~all Signal users are using a centralized app under this definition.

Yes, they do, and it's centralized. What exactly makes you think otherwise?


Bluesky.


E2E encryption is a net loss for a lot of use cases. In particular, most DMs are spam in my experience.

Spam prevention is much harder if the server can't see the message. Spam reporting can be done with sufficient effort, but stopping known spam from reaching the user in the first place is impossible (the closest you can get is a client-side scan before actually showing the message to the user, which requires downloading the whole message just to show a "number of incoming messages" indicator, or else having the indicator lie).

And of course, E2EE is a lie if you're visiting a website anyway.


It is my understanding that many E2EE chat systems won't actually E2EE your initial message to someone you aren't already mutual in-app contacts with.

Either E2EE is something you "upgrade" an existing conversation into (only after both sides consent to the conversation); or E2EE is something that is only established once both sides have sent one another a message; or E2EE is something you can only enable before you start a conversation, if you already have the other person's public key (which you only get when you request to add them as a contact, and they accept).

I think schemes like this balance privacy with spam prevention quite well: privacy-conscious people can explicitly add each other before either person says anything / can send intentional small talk as pairing messages, while everyone else gets the benefit of a central spam filter sitting between them and messages from strangers.
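A rough sketch of that "upgrade after mutual consent" flow (hypothetical names and structure, not any particular app's protocol):

    # Pre-consent messages stay server-visible so the central spam filter can
    # see them; once both sides accept, the clients exchange public keys and
    # everything after that is end-to-end encrypted.
    from dataclasses import dataclass, field

    @dataclass
    class Conversation:
        peer_public_key: bytes | None = None
        server_visible_outbox: list[str] = field(default_factory=list)

        def send(self, msg: str) -> None:
            if self.peer_public_key is None:
                # Stranger conversation: the server can scan this for spam.
                self.server_visible_outbox.append(msg)
            else:
                self._send_encrypted(msg)

        def accept_contact(self, peer_public_key: bytes) -> None:
            # Both sides have consented; upgrade the conversation to E2EE.
            self.peer_public_key = peer_public_key

        def _send_encrypted(self, msg: str) -> None:
            # Placeholder: encrypt with a key derived from peer_public_key.
            ...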


Except they'll never replace it, because they'll be too busy making some other feature stupid-simple by centralizing it, and we'll be back to centralized social media.



Obfuscating your command line is racy. The only fix for issues like this is to not pass passwords on the command line, except when debugging.
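For anyone wondering why it's racy: on Linux the command line is world-readable via /proc from the moment the process starts, so a scanner can grab it before the program has rewritten its own argv. A minimal sketch of such a scanner (illustrative; the `--password` flag is hypothetical):

    # Poll /proc/<pid>/cmdline (NUL-separated argv) for anything that looks
    # like a password flag. A program that scrubs its own argv after startup
    # will still lose this race some of the time.
    import glob

    def snoop_cmdlines() -> None:
        for path in glob.glob("/proc/[0-9]*/cmdline"):
            try:
                with open(path, "rb") as f:
                    argv = f.read().split(b"\0")
            except OSError:
                continue  # process already exited; keep scanning
            if any(arg.startswith(b"--password") for arg in argv):
                print(path, argv)

    while True:
        snoop_cmdlines()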


This is an issue with any software that tries to maintain backwards compatibility, not just Linux. Windows has:

- Many years worth of different control panels.

- Little consistency with respect to toolkits in general.

- Fractional scaling issues in applications using older toolkits (e.g., open up the policy editor and notice the blurry fonts). Microsoft is actually giving up here and has been experimenting with ML-based scaling for old applications (an approach I expect we'll eventually see in Linux as well).

Apple handles this by breaking compatibility every so often, forcing old software out of the picture.


Windows' backward compatibility is way better, though. There are plenty of GNOME extensions and applications that just don't work anymore under modern GNOME Wayland.

Example: all the redshift applications and extensions for lowering the screen brightness.


It's seemed a little odd to me, particularly given Linus' attitude of "never break userspace", that so many high-profile projects decide to break backwards compatibility almost for the sake of it (particularly GTK/GNOME).


I notice that even the blurry scaling on Windows looks better than what we have on Linux. It seems that they have some special algorithm for that. Can anyone who knows how it's implemented chime in here?


It's definitely better than bilinear and bicubic. Looks like Lanczos but with some optimization for ClearType.
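For reference, the Lanczos kernel it resembles (a = 3 is the usual choice); this is just the textbook formula, not a claim about Windows' actual implementation:

    # Lanczos-a resampling kernel: sinc(x) * sinc(x / a) for |x| < a, else 0.
    import math

    def lanczos(x: float, a: int = 3) -> float:
        if x == 0.0:
            return 1.0
        if abs(x) >= a:
            return 0.0
        px = math.pi * x
        return a * math.sin(px) * math.sin(px / a) / (px * px)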


My FW13 AMD laptop (61Wh battery) can last 11hr+, technically. If I'm doing anything other than light web browsing, that quickly drops to 8hr. If I'm watching videos, it's more like 5hr.

Unfortunately, at least on Linux, it requires quite a bit of tuning for the moment. But there are some pretty good guides.

Suspend battery life still isn't great, but it's _much_ better (with s2idle supported) on the latest-gen AMD platform.

I previously had the 11th gen Intel and... I got much better battery life than you, but it was still pretty bad.


This is really interesting to me. I too have an 11th gen Intel machine running Arch, and while I get better battery life than 2 hours, it's still the weakest part of the system. I very rarely put it to sleep; I just turn the whole machine off. I was planning on upgrading to the AMD motherboard someday but didn't really see a reason to do so yet; this might accelerate my plans.


Yeah, sleep on the 11th gen is basically worthless. But the battery upgrade (especially after a few years of wear and tear) and the new AMD board are worth it.

... unless you watch a lot of video. Hardware video decoding uses more power than software video decoding in many cases: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10223

