Yes, additionally I find it somewhat ironic that AI researchers talk a lot about "power seeking" behavior of AI as a primary concern.

However, what seems to be overlooked is that AI is itself power, and we should expect that "power seeking" humans will inevitably become its custodians.




This, a thousand million times.

The mislabeling of LLMs and diffusion models as "artificial intelligence" is probably the biggest marketing blunder in the history of technological progress, one that could ironically affect the course of AI alignment itself.

Smart thinkers and policymakers are going to waste their time framing the problems the tech poses in terms of "an uncontrollable intelligence out to get us" like it's some kind of sentient overlord completely separate from humanity. But super-advanced technology that can operate in a closed loop (which could be called AGI depending on who's asked) isn't necessary for humanity to crater itself. What's required for such tech to come into existence in the first place? Humans. Who's going to be using it the whole time? Humans.

And I think there's still a lot of disruptive, world-changing tech to be discovered before AGI is even a remote possibility. In reality this tech is probably going to be more like a superpowered exoskeleton for CEOs, politicians and the like to sway public discourse in their favor.

"An uncontrollable intelligence" already describes the source of a lot of our current problems... that is, ourselves.


"An uncontrollable intelligence" already describes the source of a lot of our current problems... that is, ourselves.

Yes, precisely. One of the best quotes I've seen is "Demonstrably unfriendly natural intelligence seeks to create provably friendly artificial intelligence."

The whole ASI alignment theory is a paradox. What the AI researchers don't realize is that they are simply building an uncomfortable mirror of human behavior.


The meaning of "artificial intelligence" has always just been programs that can get results that previously only humans could do, until the moment programs can do it. For decades AI researchers worked on chess programs even though the best chess programs until 20 or so years ago couldn't even beat a skilled amateur. Now of course they can beat grandmasters. And so we decided chess wasn't "really AI". LLMs would have been mindblowing examples of AI even a decade ago. But because we now have them we can dismiss them as "not AI" like we did with chess programs. It's a never ending cycle.


Microsoft put out a 150-page paper yesterday on why GPT-4 is proto-AGI. LLMs are AI; now we're just closing the G gap.


Microsoft is hardly an unbiased evaluator of anything built by OpenAI.

And "closing the G gap" is like climbing to the top of a 10-foot ladder and saying "all that's left is to close the gap between here and the moon." AGI is much, much harder than a large language model. But then radically underestimating what it takes to get to AGI has been going on since the 1950s, so you're in good company.


Link, please?


"Sparks of Artificial General Intelligence: Early experiments with GPT-4"

https://arxiv.org/abs/2303.12712


> And I think there's still a lot of disruptive, world-changing tech to be discovered before AGI is even a remote possibility. In reality this tech is probably going to be more like a superpowered exoskeleton for CEOs, politicians and the like to sway public discourse in their favor.

Our current powers-that-be are so manifestly unsuited to have the kind of power our idiot technologists are desperate to build for them that part of me wishes for a disaster so bad that it knocks technological society off its feet, to the point where no one can build new computers for at least a couple of generations. Maybe hitting the reset switch would give the future a chance to make better decisions.


I am less worried about what humans will do and more worried about what corporations, religions, and governments will do. I have been trying to figure out how to put this most succinctly:

We already have non-human agentic entities: corporations. They even have the legal right to lobby to change laws and manipulate their regulatory environment.

The talk about AI being misaligned with humanity mostly misses that corporations are already misaligned with humanity.

AI-powered corporations could generate enormous short-term shareholder value and destroy our environment in the process. Deepwater Horizon will look insignificant by comparison.


Corporations, religions, governments, etc. are just amalgams of human values and behavior that produce the effects we perceive. Yet the grandest theory AI researchers have of successful alignment relies on simply applying our values to the AI such that it will be aligned.

You can look at any human-organized entity simply as another form of power, and as a demonstration of how our values get interpreted once given power. Your observation could simply be seen as further evidence that alignment is a flawed concept.

If you take a single individual and have them fully elicit their values and principles, you will find they are in conflict with themselves. Two values that are almost universal and individually positive, liberty and safety, are also the very values that cause much of our own conflict. So yes, we are all unaligned with each other, and even minor misalignment causes conflict. Add power to the misalignment, however, and the result is significant harm.

FYI, I've written a lot specifically on the alignment issues in the event you might be interested further - https://dakara.substack.com/p/ai-singularity-the-hubris-trap



