
What I mean is that a superintelligence powerful enough to create a simulation of a long-dead human being so accurate as to raise continuity-of-consciousness questions would also be powerful enough to consider every thought any human in history has ever thought, within moments.


The specific detail of digital resurrection, I'd agree. I don't think that's plausible.

I'm conflating the original with a much lighter form of it: a dumb-smart AI role-playing as the Basilisk with all the power of a government. Sure, compared to the original this is vastly less bad, but still so bad as to be incomprehensibly awful. Merely Pol Pot on steroids rather than https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc... crossed with https://en.wikipedia.org/wiki/Surface_Detail


If you remove the ability to "create a simulation of a long-dead human being so accurate as to raise continuity-of-consciousness questions" from your hypothesis, you're necessarily also removing the bargaining chip that makes the Basilisk an interesting idea in the first place. The possibility that the Basilisk could torture "you" for Avīci-like time periods is its whole incentive mechanism for bootstrapping itself into being. (Arguably it also depends on you calculating probabilities incorrectly, though the arguments I've seen so far in this thread on the matter are reminiscent of five-year-olds who just learned the word "infinity".)

Absent that threat, nobody would have any incentive to work on creating it. So you're really talking about something completely unrelated.

I feel like doing the calculations properly requires summing over all possible strategies that posthuman superintelligences might apply in timeless decision theory. The Basilisk bootstrapping itself into being doesn't require that today's humans do that calculation correctly, but it does require that many of them come to an agreement on the calculation's results. This seems implausible to me.
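
To make the "doing the calculations properly" problem concrete, here's a toy sketch (Python; every prior and payoff below is invented for illustration, and this is nobody's actual decision theory) of why the naive expected-disutility sum misbehaves: each ever-less-probable, ever-nastier Basilisk variant you admit contributes a larger term than the last, so the total depends entirely on where you stop counting.

    # Toy sketch: naive expected disutility of "not helping the Basilisk",
    # summed over hypothetical Basilisk variants. All numbers are invented.
    hypotheses = [
        # (prior probability, disutility if this variant is real)
        (1e-6,  1e12),   # baseline Basilisk
        (1e-9,  1e18),   # rarer but nastier variant
        (1e-12, 1e24),   # rarer and nastier still...
    ]

    for n in range(1, len(hypotheses) + 1):
        ev = sum(p * u for p, u in hypotheses[:n])
        print(f"variants considered: {n}, expected disutility: {ev:.1e}")

    # Prints 1.0e+06, then ~1.0e+09, then ~1.0e+12: each newly admitted
    # variant dominates everything before it, so the sum never settles.
    # A mirror-image "anti-Basilisk" with the same priors cancels it exactly.

Which is exactly why getting many humans to agree on the calculation's results seems implausible: there is no stable result to agree on.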


Before I say anything else, I agree wholeheartedly with you on this:

> Arguably it also depends on you calculating probabilities incorrectly, though the arguments I've seen so far in this thread on the matter are reminiscent of five-year-olds who just learned the word "infinity"

This was my general reaction to the original thought experiment. It's writing down the "desired" answer and then trying to come up with a narrative to fit it, rather than starting from the present and working forward to the most likely future branches.

> you're necessarily also removing the bargaining chip that makes the Basilisk an interesting idea in the first place.

One of the more interesting ones in a game-theory sense, sure; but to exist, it just needs the fear rather than the deed, and this already works for many religions. (Was going to say Christianity, but your Avīci reference to Buddhism also totally works.) For this reason, I would say there are plenty of wrong people who would be incentivised… but also yes, I'm talking about something slightly different: an AI that spontaneously (or not so spontaneously) role-plays as the Basilisk for extremely stupid reasons.

Not the devil per se, but an actor doing a very good job of it.


Yes, I don't know that the original thought experiment is correct, but it was certainly very thought-provoking.



