
I mean, even if that is exactly what "x-risk research" turns out to be, surely even that's preferable to a catastrophic alternative, no? And by extension, isn't it also preferable to, say, a mere 10% chance of a catastrophic alternative?


> "surely even that's preferable to a catastrophic alternative, no?"

Maybe? The current death rate is 150,000 humans per day, every day. It's only because we are accustomed to it that we don't think of it as a catastrophe; that's a World War II death count of 85 million people every 18 months. It's fifty September 11ths every day. What if a superintelligent AI can solve for climate change, solve for human cooperation, solve for vastly improved human health, solve for universal basic income which relieves the drudgery of living for everyone, solve for immortality, solve for faster-than-light communication or travel, solve for xyz?

How many human lives are we trading against the risk?

But my second point is that it doesn't matter whether it's preferable; events are in motion and aren't going to stop to let us off. It's preferable if we don't destroy the climate, kill a billion humans, and make life on Earth much more difficult, but that's still on course. To me it's preferable to have clean air to breathe and people not being run over and killed by vehicles, but the market wants city streets for cars and air primarily for burning petrol and diesel and only secondarily for humans to breathe, and if they get asthma and lung cancer, tough.

I think the same will happen with AI: arguing that everyone should stop because we don't want Grey Goo or Paperclip Maximisers is unlikely to change the course of anything, just as it hasn't changed anything up to now despite years and years of raising it as a concern.


I think that the benefits of AGI research are often omitted from the analysis, so I'm generally supportive of weighing the costs against the benefits. However, I think you need to do a lot more work than just gesturing in the direction of very high potential benefits to actually convince anyone, in particular since we're dealing with extremely large numbers that are extremely sensitive to small probabilities.

EV = P(AlignedAI) * Utility(AGI) + (1 - P(AlignedAI)) * Utility(ruin)
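
To make the sensitivity concrete, here's a toy Python sketch (all utilities and probabilities are made-up placeholders, not estimates) showing how the EV swings as P(AlignedAI) changes:

    # Toy expected-value calculation; every number here is an illustrative placeholder.
    def expected_value(p_aligned, utility_agi, utility_ruin):
        return p_aligned * utility_agi + (1 - p_aligned) * utility_ruin

    UTILITY_AGI = 1e12    # hypothetical upside, arbitrary units
    UTILITY_RUIN = -1e12  # hypothetical downside, arbitrary units

    for p in (0.99, 0.95, 0.90, 0.50):
        print(f"P(AlignedAI)={p:.2f} -> EV={expected_value(p, UTILITY_AGI, UTILITY_RUIN):.2e}")

    # A nine-point drop in P(AlignedAI) (0.99 -> 0.90) cuts the EV from 9.8e11 to 8.0e11,
    # and if the downside is larger than the upside the sign flips even sooner.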

(I'm aware that all I did up-thread was gesture in the direction of risks, but I think "unintended/un-measured existential risks" are in general more urgent to understand than "un-measured huge benefits"; there is no catching up from ruin, but you can often come back later and harvest fruit that you skipped earlier. Ideally we study both of course.)


If the catastrophic alternative is actually possible, who's to say the waffling academics aren't the ones to cause it?

I'm being serious here: the AI model the x-risk people here are worrying about (because it waffled about causing harm) was originally developed by an entity founded by people with the explicit stated purpose of avoiding AI catastrophe. And one of the most popular things for people seeking x-risk funding to do is to write extremely long and detailed explanations of how and why AI is likely to harm humans. If I were worried about the risk of LLMs achieving sentience and forming independent goals to destroy humanity based on the stuff they'd read, I'd want them to do less of that, not fund them to do more.



