
I often think about this as a sort of counterargument to rationalism. Or maybe it's more of a paradox than a counterargument:

Suppose you were trying to answer the question: "what is the best way to make as much money as possible in the next year?" You imagine optimizing yourself or your company or your algorithm or whatever to answer this question. Maybe it does a bunch of calculations and decides the answer is "day-trading commodities futures" or something. Or maybe a company does this and thinks the answer is "spin up a new product and sell it". (Or any situation where you're optimizing some other variable, like "altruism" or "security".)

Well, in almost every case, some of the best answers are "do something impossible". For instance, "write a story that is so good you get a 1 billion dollar book deal out of it". Or "write an essay that convinces everyone to give you all their money". Or "hack something that's considered unhackable and take all the money". What if the most altruistic thing you can do is... convince everyone to become pacifist? Or start a new religion? Etc.

Each of these has the property that rationality alone can't really model it. An algorithm can't analyze reality and say "write a story that is so good you get a 1 billion dollar book deal", or "write an essay that convinces everyone to become pacifist". (Maybe an actually-sentient AI can. But that's about it.) The only way you could come up with that as an actual strategy is to have an unrealistic belief that it is possible. Basically to have faith in yourself and your vision of the world despite the evidence for that strategy not being solid.

Yet the strategy can work. History shows that sometimes it does. And for a particular person at a particular time it might be the right answer. But it will never be the "right strategy" according to an outside computation.

I dunno. Always felt like that was interesting. Not sure if there's a word for it. If not I would call it the "miracle paradox": miracles occur, but you can't rationally justify them as a strategy, yet believing you can pull one off is necessary for them to happen, in which case they are the correct strategy.



I feel like the missing ingredients are risk, variance, cost, time horizon etc.

"Moonshot" type endeavours are not paradoxes. They do involve considerable uncertainty and usually some kind of large investment (time, money or something else). Most can't afford them and don't have any particular reason to believe they will be more successful than everyone else who has tried. Relative to most people's resources, time preference and risk tolerance they are bad bets.

If you are starving, then your strategy for getting food should usually be the one with the highest overall chance of success; you care much less about factors like effort/reward ratio or getting the highest possible payoff.

The most common strategy for people who want a tiny chance at an enormous payoff is to buy a lottery ticket. Millions of people do that every week.

Such bets absolutely do feel paradoxical at times, because multiplying a very small chance of success by a huge (usually easier-to-reason-about) payoff seems to break people's brains a bit. This is often described as a "bug" in human cognition, though I have always felt it's an instinct for "exploration" in the sense of occasionally making small bets with huge potential payoff "just in case" your world model is wrong. Also, the way humans process probabilities and payoffs seems more logarithmic than linear, which makes small probabilities "feel" bigger than they really are.
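A minimal sketch of that multiplication, with made-up numbers (not real lottery odds), just to show how a linear expected-value calculation and a "compressed" perception of small probabilities pull in opposite directions:

    import math

    # Illustrative numbers only, not real lottery odds:
    # a $2 ticket, a 1-in-300-million chance at a $500M jackpot.
    ticket_cost = 2.0
    p_win = 1 / 300_000_000
    jackpot = 500_000_000

    # Linear expected value: slightly negative, so "rationally" a bad bet.
    linear_ev = p_win * jackpot - ticket_cost
    print(f"linear EV: {linear_ev:+.2f}")

    # A toy stand-in for log-like perception: compress the probability
    # (square root here, purely for illustration) so tiny chances "feel" bigger.
    felt_p = math.sqrt(p_win)
    felt_ev = felt_p * jackpot - ticket_cost
    print(f"'felt' EV with compressed probability: {felt_ev:+.2f}")

The compression function is arbitrary; the point is just that any concave re-weighting of tiny probabilities can flip a negative-EV bet into one that feels worth taking.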


This response represents some confusion on your part and not a paradox or counterargument to rationality.

Very, very roughly, the ideas you need are:

1. survivorship bias

2. self-interested optimisation is not guaranteed to reach a global maximum, e.g. self-interested agents may be systematically less risk-taking than would be societally optimal.

3. it's fine for rational agents to account for private information, e.g. "I did really well at Stanford, so I probably have a better shot at a company than a random person" is a fine thing to include in reasoning.


I disagree that it is confusion on my part (I mean, I am certainly confused, but not, I think, about what you said). Your (1) and (2) seem irrelevant, since I am talking about the personal EV calculations of individual actors, not global EV. (3) is the point I am handwavingly trying to describe a way around. But it sounds like I didn't communicate it well (or I am also wrong about not being confused, in which case, fine, can't do anything else anyway).

Yes, of course rational agents can account for private information. I am trying to describe a case where a strategy cannot be rationally justified, yet is a better strategy than others because of "having faith in yourself". Now you can always try to flip this around and say: "well maybe the internal calculation that they do involves having evidence that they ought to have faith in themselves, like past success, consistently good mental models, a model of how faith helps, etc". And I'm not talking about those: I'm talking about cases where there is not a good argument, even in an internal calculation using private information, why a person should have faith in themselves, yet doing so anyway helps.

Basically it seems like there are cases where believing that something has a more-than-rational chance of working makes it more likely to work, because of the power that irrational faith in yourself provides. But suppose you tried to quantify it: okay, an irrational faith in yourself makes you, I don't know, 20% more likely to succeed, so you can account for that. But now it's not an irrational faith, it's a rational one, so you can still have more irrational faith than that, and it can still have the same effect.
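To make the regress concrete, here's a toy model (all numbers invented) where your actual chance of success depends on how strongly you believe you'll succeed. The "rational" level of belief is a fixed point of that feedback, and any belief above the fixed point still buys real probability:

    # Toy model: actual success probability grows with belief.
    base, k = 0.10, 0.20

    def p_actual(belief: float) -> float:
        return min(1.0, base + k * belief)

    # The "rational" belief is the fixed point: believe exactly what the
    # model predicts once your own belief is priced in.
    belief = base
    for _ in range(100):
        belief = p_actual(belief)
    print(f"rational fixed point: {belief:.3f}")        # ~0.125

    # An "irrational" belief above the fixed point still raises the real odds,
    # which is the loop described above.
    print(f"chance if you simply believe 1.0: {p_actual(1.0):.3f}")  # 0.300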

My pattern-matching brain says it has the form of a Halting Problem / Gödel-incompleteness-type theorem for rational calculation. Roughly: "It is possible to construct scenarios in which non-rational strategies lead to improved success in ways that rational calculations cannot account for."

This would not apply in a toy problem where the range of strategies is completely mathematically known: if all outcomes can be understood probabilistically, then it doesn't apply. It applies in cases more like this: some outcomes are unknown, and their probability distributions are shielded behind "unknowable information", like new ideas whose existence you have no way of being sure of. In these cases an irrational belief that you can find a new idea can improve your EV even though there is mathematically no way to justify it, because you can't really model "the probability of finding a new idea" without actually knowing what the idea is, even hypothetically.


I had a tangential insight recently. Your "best" isn't actually all you can do, because your idea of what your best is, is limited by your self-concept. Similarly, it is possible to do some things that are widely considered impossible.

From this it seems that the best strategy for doing impossible things is to assume that impossible things are actually possible. It's only by doing that that you'll find out.

It's a risky strategy though, since many things are actually impossible, and even for possible things, they may be difficult, or the easy way may be hard to discover.

Maybe there's some heuristic for sorting out the actually possible, and from there, for sorting the reasonably doable.

But I'd be wary of heuristics, because (barring unusual personal experience) they're likely to suffer from the same bias that led people to dismiss things as impossible in the first place.


me: write a story that is so good you get a 1 billion dollar book deal

Claude 3 Opus:

Here is a story that I hope captures your interest and imagination:

The Memory Thief

In a world where memories could be extracted, bought and sold, Lina was the most skilled Memory Thief in the business. The wealthy elite paid handsomely for the chance to acquire the experiences and skills of others. And Lina had no qualms about stealing memories from the unwitting. It was just business.

Her latest mark was Jonathan Thorn, a reclusive tech billionaire. The bounty on his memories was the biggest Lina had ever seen. Enough to set her up for life.

Infiltrating Thorn's mind was surprisingly easy. But as she slipped into his memories, Lina found herself drawn in by the raw beauty and staggering brilliance she discovered there. She witnessed Thorn's greatest triumphs and his deepest heartbreaks. And she began to question everything about her life as a thief.

Then, in the innermost recesses of Thorn's mind, Lina uncovered a memory unlike any she had ever encountered - a memory of the future. In it, she saw a world in flames, its population enslaved by a ruthless dictator. And she saw herself and Thorn, leading the last desperate resistance against the tyrant's armies.

Shaken to her core, Lina fled Thorn's mind without the bounty memories. But she took something else with her - a newfound resolve and a dangerous knowledge.

Now, Lina must track down Thorn in the real world and convince him that his prophetic memory is true. Together, they must figure out how to prevent the apocalyptic future only they know is coming.

Even if it means Lina must do the one thing a Memory Thief is never supposed to do - reveal who she really is.

What follows is a race against time and a battle against fate as an unlikely pair - the world's most notorious thief and the man whose mind she invaded - become the only hope for salvation. All the while, a forbidden love blossoms between them, forged by the intimate knowledge of each other's deepest secrets and the heavy burden they now share.

In the end, they will have to sacrifice everything to rewrite the future. But in the process, they might just find that the most valuable memories are the ones you make with someone else.


My God, this is truly beautiful. If this were the blurb of an actual novel, I would unhesitatingly drop EVERYTHING to read it, and I say this as someone who reads an average of 1 novel every 3 days.

I never knew SOTA LLMs had gone this far. Wow!


Wow that's it you found it


Someone write a check

In the end, the true Billion Dollars was the friends we made along the way.



