
Because using technology to solve human problems rarely/never works. [1] was originally written about spam and has held up as mostly correct. How does replacing “spam” with “AI-generated spam” change anything? You can try to fight this stuff, but it will look more like AI to detect AI (similar to our current anti-spam tech). There's no reason to believe cryptography has some kind of magic bullet here, as it's an unrelated problem domain. And the person claiming that getting kicked off prevents you from coming back ignores that a) we haven't solved tying disparate online personas to a unique offline one (despite Facebook ostensibly trying really hard), and b) all sorts of secondary problems pop up when you try to do that (e.g. it ignores the concepts of learning from your mistakes and redemption, key things that happen frequently with the young or anyone else testing boundaries).

[1] https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...




> Because using technology to solve human problems rarely/never works.

You're badly misunderstanding the parent post - it is not proposing a technological solution to a human problem, but a technological enforcement of a fundamentally human solution:

> subjectively score every piece of content that crosses our phone against our social trust graph...can deputize our people in our extended social network to mark content as appropriate for kids or not, or otherwise filter

This is a social web of trust, where real people do the ranking and trust assignments - the cryptography and other technology just handles the bookkeeping.
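
To make "bookkeeping" concrete, here's a minimal sketch (my own illustration, not something the parent specified - PyNaCl/Ed25519 and all the names are assumptions): a human curates the trust list, and the crypto only verifies that a post was signed by a key on that list.

    # Hypothetical sketch: humans curate the trust list; the crypto only
    # proves "these bytes were signed by the holder of key K".
    # Assumes PyNaCl (Ed25519); names are invented for illustration.
    from nacl.signing import SigningKey, VerifyKey
    from nacl.exceptions import BadSignatureError

    # Alice signs her post; the signature travels with the content.
    alice_key = SigningKey.generate()
    signed_post = alice_key.sign(b"Here is my take on the article...")

    # My client keeps a table of keys I, a human, have chosen to trust.
    trusted_keys = {alice_key.verify_key.encode(): "alice (met at work)"}

    def is_from_trusted_human(signed_msg, key_bytes):
        # Pure bookkeeping: no content analysis, just a signature check
        # against my manually curated list.
        if key_bytes not in trusted_keys:
            return False
        try:
            VerifyKey(key_bytes).verify(signed_msg)
            return True
        except BadSignatureError:
            return False

    print(is_from_trusted_human(signed_post, alice_key.verify_key.encode()))  # True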


Given that GPT is already difficult to distinguish from a person who’s confidently wrong, how does this web of trust system solve the problem?

The belief that anything will “solve” this seems naive when there's 20+ years of proof of this being an “unsolvable” problem despite repeated technological, social, and legislative attempts. There might be a new normal established, with new battlegrounds drawn, and we learn to “live” with it, but I'm willing to bet non-trivial sums of money against there being any true “solution” here.


Yes, the improving strength of GPT is magnifying the problem, but it is unrelated to the solution. The solution space steps outside of examining the content of the message for truth. The solution is to sign messages using a social trust graph and to score posts based on social trust distance.

It's not a very compelling argument to me that a (very short) 20-year history of failing to solve this problem means that it cannot be solved.

I'm willing to bet you $1000.00 that this will be completely solved in 10 years. In 10 years we will know if content we consume is "fake" or "real". It may still be offensive, and harmful to gullible people - but it will all be scored as to how real it is. "Real" will be defined as the likelihood that the content comes from a real human being, as agreed upon by the other people between you and that person. Content one hop away from you, from a friend, will have a score of 100% real, and content many hops away will have a lower score. You will probably start your day by sorting the content you consume by the likelihood that it is real.
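
As a rough sketch of that scoring (illustrative only - the graph, names, and decay factor are invented, not a real design): breadth-first search over the vouching graph, 100% at one hop, decaying with each additional hop.

    from collections import deque

    # Hypothetical trust graph: who has vouched for whom. In practice the
    # edges would be signed key-to-key attestations; names are invented.
    trust_edges = {
        "me":    ["alice", "bob"],
        "alice": ["carol"],
        "carol": ["dave"],
    }

    def realness_score(graph, me, author, decay=0.5):
        # BFS for the shortest vouching path: 1.0 at one hop ("a friend"),
        # multiplied by `decay` for each extra hop, 0.0 if unreachable.
        seen, queue = {me}, deque([(me, 0)])
        while queue:
            node, hops = queue.popleft()
            if node == author:
                return 1.0 if hops <= 1 else decay ** (hops - 1)
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
        return 0.0  # no path through my web of trust

    print(realness_score(trust_edges, "me", "alice"))    # 1.0  (one hop)
    print(realness_score(trust_edges, "me", "dave"))     # 0.25 (three hops)
    print(realness_score(trust_edges, "me", "mallory"))  # 0.0  (unknown)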

I did actually work on PGP - and I'm willing to concede it didn't succeed. But DNS works and bitcoin works (technically). So we do use various flavors of cryptographic trust to make sure that actions in a network are "real" versus "forged".

And yes - it's true that bots can creep into a social trust graph... so yeah, it may take effort to keep pruning the weeds in the garden, in a sense.

Note that in some ways I'm not really trying to argue FOR crypto per se - I'm just saying that the OP should at least critique crypto if they want to make the thesis they are making. And I'm arguing that it is a big omission to gloss over the utility of crypto; it will be hard to argue that crypto is not a significantly powerful modifier to the OP's original thesis.


You clearly don't actually know anything about webs of trust. I encourage you to read up on them: https://en.wikipedia.org/wiki/Web_of_trust

It's obvious to anyone with a passing familiarity with WoTs that you seed your web with people you know in real life. GPT is not "difficult to distinguish from a person who’s confidently wrong" in real life.


Are you going to be accepting direct confirmations only or indirect as well?

If you are only accepting direct confirmations, this means you are only going to talk to people who you meet in person. This is totally fine and will work, but then you don't need any new tech -- just ask for their email / phone / nickname on your favorite social site. Or make a private forum (or a Signal/Telegram/Whatsapp group) and invite them there.

If you are accepting indirect confirmations, then once the network grows big enough, there will be bots. Maybe some of your friends meet Greg, director of marketing for Widgets Inc., and correctly confirm him as a real human, and then Greg confirms an army of GPT telemarketer bots as "real humans" so they can do the sales and earn Greg a bonus. Or maybe your good friend gets malware on their computer and their key is used to confirm a bunch of keys without their knowledge.


It might be wise to point to something other than a Wikipedia page before claiming I don't know anything about an entire topic.

You seem to be, intentionally or otherwise, completely missing what I'm saying. PGP just establishes ownership of a private key. It doesn't say anything about that person then choosing to sign the output of GPT, or giving GPT that private key to do whatever with. And GPT can mimic whatever writing style you give it; it's not hard to imagine giving it various writing samples of yours to learn from and imitate. So please explain how a web of trust solves anything there, aside from trying to keep track of which person in your personal network is a spam vector - and given that there are super-connectors with thousands or tens of thousands of real-world contacts, that's not logistically realistic to manage, since most people aren't cryptography nerds.

There's also a bigger, more fundamental problem with applying a web of trust here. Trust isn't transitive: if I trust person A and they trust person B, in reality there's nothing we can say about my trust in B. Trust also isn't binary: if I trust person A about specific science topics, that trust doesn't extend to other topics. And trust isn't static and sticky, whereas computer systems generally treat it as such - trust would need to be scored and revoked automatically somehow (and then we're back to an AI war: AI to detect abuse versus better GPT to evade detection).

This also ignores that modeling human trust webs with a CS model that works nothing like them isn't a good recipe for success. Human trust webs themselves have massive trust and scaling problems (cough Theranos, WeWork, FTX, Madoff, etc. etc.). Notice how web-of-trust work always sticks to basic cryptographic primitives, which are easy to write papers about and solve academically, while defining what trust actually means, or how a web of trust handles AI content, is not a solved problem by any means. Obviously PGP has been around forever and AI is a bit newer, so maybe there will be interesting work coming out of this space at some point, but AFAIK today a web of trust buys you bupkis in terms of fighting GPT spam.

I would recommend reading any number of articles that discuss why PGP and key-signing parties failed. It's not purely a UX issue. The bigger problem is that even in a "trusted" system, fraud arises spontaneously because it's a prisoner's dilemma: there's a material advantage to perpetrating fraud, and an even better one to helping perpetrate it (revoking that trust is a much harder and longer political process).
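
To make the earlier transitivity objection concrete, here is the naive scalar model these systems implicitly fall back on - a toy sketch with invented names and numbers, not anyone's actual proposal:

    # One scalar per edge, composed multiplicatively along a path - the
    # usual simplification, and exactly what the above argues is wrong.
    edge_trust = {
        ("me", "alice"):    0.9,  # I trust Alice a lot... about chemistry.
        ("alice", "greg"):  0.9,  # Alice trusts Greg, her coworker.
        ("greg", "bot_42"): 0.9,  # Greg vouches for his "sales contact".
    }

    def path_trust(path):
        score = 1.0
        for a, b in zip(path, path[1:]):
            score *= edge_trust.get((a, b), 0.0)
        return score

    # The model grants a GPT bot ~73% trust, because it assumes trust is
    # transitive and context-free - the two things human trust is not.
    print(path_trust(["me", "alice", "greg", "bot_42"]))  # 0.729

The 0.729 is pure arithmetic; it encodes no information about whether I have any basis to trust Greg's "contacts" at all.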

As a more concrete, practical demonstration of this failure, consider certificate authorities, which are assumed to be "trusted core signatories" in a PKI system. A PKI system is really the same thing as a web of trust; I just delegate verification to a third party. As cryptocurrencies should have shown, people still prefer having a virtual account in a bank that manages those funds / gets them lower fees. Similarly, people will outsource the complexity of validating identities (CAs signing certs for websites). CAs have repeatedly abused this, to the point where AFAIK the security community generally acknowledges that CAs are largely worthless - even the "good ones" struggle to do verification at scale, and there are so many CAs in typical trust lists that it's basically guaranteed some are malicious actors.

And we know decentralization doesn't actually work for end users because it's too complicated a mental model. People want a named intermediary to delegate responsibility to; that's why most people defer CA validation to browsers and operating systems. PGP would work similarly, so now you've got people delegating key trust to Apple, Microsoft, Google, Signal, etc. etc., plus nerds who use open-source verified key managers and maintain their own infra to manage these lists. But that's not a representative sample of what end users will accept at scale. So you're back to centralized control, which will be better than the status quo, as OSes and browsers realistically are more resistant to handing out broken signatures. And of course maybe better algorithms and methods will be developed to solve these shortcomings.

But a lot of these issues have existed and been documented for a long, long time, independent of the vague idea of using a web of trust as a GPT detector/blocker. I've been around the Bay Area for 10+ years, and I remember having friends who thought this would take off any day now and worked really hard to make it happen, hosting signing parties and whatnot. It didn't, and I was pretty confident it was a pure nerd activity that wouldn't have any impact in its current form (and regardless of the UX challenges, the problems are much more fundamental and worse). Web of trust is seriously hard even in its simplest possible form, which is PGP, and that's failing miserably despite being around for a very long time.

Would you agree that it's on you to:

A) provide some supporting evidence for the claim that I don't know what I'm talking about, and

B) offer something more than hand-waving "sprinkle some decentralized cryptography here" - actually explain how you solve the human problems that are so important here, and why PGP has largely failed but is suddenly going to find a second life in GPT prevention?



