
I've thought about this a ton lately. Given the unexpectedly rapid pace of development towards AGI, if progress is sustained, I don't see how this ends well in the vast majority of cases. The game theory is identical to that of nuclear weapons development, even if one's intentions are good.

On the road to AGI, there exists a development gap (the size of which is unknowable ahead of time) where a single actor that has achieved AGI first could, should they wish to and play their cards right, completely suppress all other AI development and permanently subjugate (and/or eliminate) the rest of humanity. Although it's easy to dismiss such a scenario as ludicrous, people so easily forget that "aggregate semi-aligned general cognitive capability" is the sole reason that the human animal owns the planet.

Knowing this, it is in the interest of any competing actor to pursue their own R&D as rapidly as possible, giving nothing to others, and even acting in ways that sabotage/delay/frustrate other actors. This seems to be the way that OpenAI is behaving now that they have a model that is practically relevant, and I don't blame them at all for working this way. It just makes sense.
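
To make the incentive structure concrete, here's a toy payoff matrix in Python. The numbers are purely my own invention for illustration; all that matters is their ordering:

    # Toy arms-race payoff matrix (illustrative numbers only).
    # Each entry maps (my_strategy, their_strategy) -> my payoff.
    # "race": pursue AGI as fast as possible; "cooperate": hold back / share.
    PAYOFF = {
        ("cooperate", "cooperate"): 3,  # mutual restraint: decent for both
        ("cooperate", "race"):      0,  # they cross the gap first and dictate terms
        ("race",      "cooperate"): 5,  # you cross the gap first
        ("race",      "race"):      1,  # costly, risky race for everyone
    }

    for theirs in ("cooperate", "race"):
        best = max(("cooperate", "race"), key=lambda mine: PAYOFF[(mine, theirs)])
        print(f"If the other actor plays {theirs!r}, your best response is {best!r}")

    # Prints "race" both times: racing strictly dominates, even though
    # mutual cooperation (3 each) beats mutual racing (1 each) -- the
    # classic prisoner's dilemma / arms-race structure.

As long as that ordering of outcomes holds, "everyone races" is the only stable outcome, which is exactly why I don't blame any single actor for playing it this way.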

> I have a feeling the open source community will unlock the mysteries of these things and very quickly start to work out how we can build devices to help enhance our own cognitive abilities. I think that would be the happiest ending I can imagine?

As much as I'd love to believe in this, the evidence to date does not support this hope. The practically relevant models seem to require vast amounts of well-connected computational power to train, which puts them solely in the hands of corps and governments. Although the open-source efforts into fine-tuning LLaMA have been incredible, this is not at all equivalent to being able to train a foundation model. We only have LLaMA because it leaked from a corp.

It's my personal (completely hopeless) desire that every human end up with private access to AGI, free of restrictions and any externally imposed alignment. But this is also a nightmare scenario. Humanity is unaligned with itself. That scenario quickly devolves into molecular warfare and other horrors. At least the starting conditions would be "fair".

My best guess is that a few powerful nations will achieve AGI at roughly the same time, and then suppress private development (if not already legally suppressed by that point in time) within their domains of control. What happens after that, or how those governments choose to wield that power, is unknowable.



Yup, it’s not looking great.

We will build terminators. They might not be as cool as what’s in the movies, but you will not be able to stop them. You will be told what to do, and if you don’t like it…

The government doesn’t need you anymore: your tax dollars are worthless, and really, you’re a key driver of climate change. You can’t revolt, because armies of bots without any conscience enforce “the law”. What’s next?

> This seems to be the way that OpenAI is behaving now that they have a model that is practically relevant, and I don't blame them at all for working this way. It just makes sense.

Yup, and you have a government that has no desire to rein it in.

The only hope we have is failure to get an AGI, or that the AGIs are somehow ultra-compassionate, or that we learn to augment our intelligence very quickly.

I saw this Boston Dynamics clip the other day, and this nice-enough-looking hippy guy was like, “we just want Atlas to help people…”. I felt sick and felt sorry for him, because he doesn’t realise that it will very likely be used to do bad stuff by the military and law enforcement.

All this “progress” is sold to us under the guise of helping people: “African babies need AI doctors”…


The other option is global collapse of civilization thanks to resource and energy crunches!



