Very good article. I think the recommendations are reasonable as there really isn't any alternative. Good pointers on where to get started.
However, if you play this forward, it becomes increasingly difficult to reason about. There are already AI scientists who have posted recently about burnout from just trying to keep up. I've called this effect "Technological Acceleration Anxiety", and it will only increase going forward.
This technological disruption is unique compared to any prior one, as there is no end. There is no period of stabilization and adaptation. It is continuous and accelerating. Human beings need some islands of stability to plan and reason about their lives. This is going to become increasingly difficult.
I've spent a lot of time thinking through potential societal impacts. I'm not sure how we avoid the negative ones, but I keep listening, exploring and thinking. Many of my thoughts I've collected here, in case you have any thoughts or feedback: https://dakara.substack.com/p/ai-and-the-end-to-all-things
What I find weird about AI enthusiasts is that they’re addicted to forward progress while at the same time they must understand that it’s likely going to end badly. Even for them. It’s likely going to end, at the least, with nearly everyone jobless or dependent on UBI, and eventually running from, or being destroyed by, rogue terminators the more we build these capabilities into robots.
Because we’re building more powerful systems we don’t understand, we may be endangering not only ourselves and other living beings, but the AI too.
I see developing more powerful AI as like developing more dangerous nuclear reactors: we don’t do that, because it’s stupid, but with AI we do it because we have this strange belief that it will just lead to better reactors and no other problems. I do think people must be quickly starting to see this as a reality now and course correct. This thing is like a nuclear reactor of information. Almost like opening a portal to an alien world.
I’m wondering if it will get so wild that we stop seeing it as a good thing to keep investing in: at some point the dangers really aren’t just theoretical, people start getting hurt economically and socially, and as a species we just stop the billions of dollars of funding for it.
I know people will say “you can’t stop people”, but I think it’s more like: the majority of sane, good, rational scientists who you’d actually need to progress the field might no longer be into it.
It's the techbro variant of toxic positivity to believe this ends without significant suffering.
>I’m wondering if it will get so wild that we stop seeing it as a good thing to keep investing in: at some point the dangers really aren’t just theoretical, people start getting hurt economically and socially, and as a species we just stop the billions of dollars of funding for it.
I think this is more likely than that crowd thinks. They love gloating "adapt or die", but they overlook that any adaptation, including their own, will take place at biological speed, which they also love gloating is slower than and inferior to silicon. But they're not going to be enhanced by Neuralink - they're just going to hit the limits of what they can accept, even if they hit those limits after everyone else. Of course, who knows how much there will be to salvage by that point.
Bit late to this one so maybe you'll miss the response, but I don't think the "we" in your comment is cohesive enough to decide to limit this.
E.g. sufficiently advanced AI obviates the economic dependence that the upper class has traditionally had on the working and middle classes for labor. There will be some, somewhere, already imagining that this is the path to a techno-utopia Disneyland where the few command all of the resources.
My conclusion is there’s nothing, literally nothing, that can be done regulation-wise, because when has regulation done anything proactively in the US since the 1970s? Even when children are killed or entire towns are polluted, the _reactive_ regulation and efforts are at best face-saving and short-sighted. So we are pretty much left to the vagaries of free-market capitalism, the quirks of nature, and whatever true AI determines its own purpose to be, to see how the next few years and decades are going to go.
I am converging on a separate topic in this regard, viz. the existence of aliens. I’ve always thought that super-intelligent aliens would have nothing they need from us or this planet (unless there’s some magical life force they need to extract from sentient living things), but I did conclude that the one use they could have is mere observation of how we evolve, for academic purposes. In that regard, this moment is indeed pivotal in history, so now would be the time to check the skies and see if flying saucers are observing how we are dealing with it lol.
I’m not advocating for authoritarianism but at some stage it will become a national security issue.
You cannot have people building robotic assault weapons in their garage and then unleashing them on the public. This won’t be far off now that we’re getting closer to solving computer vision.
At some stage no one would be safe; the good bots will be in danger, and so too will the public.
Let’s make this more constructive: why is there nothing that can be done? If we really, as a civilisation, wanted to avoid catastrophe, do you really think at this stage we couldn’t pull back?
We’re on Hacker News; you need to be creative and be a hacker, not just a person who likes the Internet.
If people start using ChatGPT-5 to hack critical infrastructure and take down power grids, do we just give up and die? Or adapt?
I’m going to float a pretty controversial idea… the technology we have today, the digital world, is an experiment. Humans can survive without it.
Already it’s causing problems such as the spread of misinformation, addiction and social division. There is nothing to say that we can’t unwind a lot of it if it endangers our futures. Technology is supposed to be a tool to help us out; it’s not supposed to endanger our lives. Clearly it’s out of control and causing a lot of anxiety. We’re moving quicker than we can adapt, and that’s not good for technological progress either, so we’re on the wrong path.
"If people start using ChatGPT-5 to hack critical infrastructure and take down power grids, do we just give up and die ? Or adapt ?"
Why do I keep hearing stuff like this?
First off, if ChatGPT-5 comes out and makes hacking critical infrastructure and taking down power grids easy, what makes you think its ability to counter that by hardening systems won't go up too?
And second, what makes you think that ease is the main thing stopping people from committing terrorism? We know that you can cause widespread and long-lasting damage by firing at fragile metal boxes that take months to replace and affect the ability to power entire regions.
There's something to be said about not needing to be physically there... but fear of getting caught is not really what keeps people from being terrorists. The fact is, for all the unfounded pessimism the 24-hour news cycle has birthed, people just generally don't want to take down power grids, even for fun, even if it's easy, or just out of curiosity.
There are reasonable angles if you want to argue for responsible AI, "it's going to turn people into terrorists" is not one of them.
Defense always has the weak hand. A malicious group with $5 million to fund hacking will defeat an organization with $500 million in defense. The only reason organizations survive is that there are far fewer hackers than organizations in the world, and most hackers don't have a lot of resources, even with their offensive advantage.
Add AI to the mix and the effective available resources become much more equalized.
What you're describing is some imagined advancement of AI so great that it trivializes the kinds of attacks that nation states study.
But you're applying that to the concept of the world as it exists now.
At that point the concept of just being able to infiltrate the power grid, because it's designed in a way that's vulnerable to infiltration, isn't a given.
The idea you can't have some tireless force actively adapting to every single imaginable exploit isn't a given.
The idea you can't have a tireless force trying to come up with attacks to in turn be mitigated isn't a given.
The idea we can't suddenly design massively independent renewable power supplies with thousands of sudden advancements isn't a given.
To be honest, the idea we can't disrupt zealotry isn't a given, just like people think it'll be used to "hack" society for the worse, why can't it be used to hack society or even the individual to be less likely to want to do that? I mean if it can take down power grids easily... how much more quickly can we move to a post scarcity environment?
This doomsday scenario requires not opening your mind to anything the model could do except nefarious things... but the nefarious things you're describing would be literal miracles. The odds that it can only perform miracles that take down power grids are pretty low.
Technically, it is already easy without AI, in relative terms.
> The idea you can't have some tireless force actively adapting to every single imaginable exploit isn't a given.
However, the force attempting exploits would be other AIs at that point, not humans. If we assume rapid disparity in intelligence power as a result of exponential growth, then you must assume some nation states will have orders of magnitude more intelligence power than others.
It is easy to imagine an end where everything is already balanced; however, between the current point in time and that end is an enormous moat filled with such problems that we might not arrive at the other side. We certainly don't need significantly more powerful AI for it to be used to create significantly greater disruptions in society, as we are already nearing a potential period of unverifiable truth and reality.
> Technically, it is already easy without AI, in relative terms.
This just feels like an awkward attempt to draw SCADA into the conversation. You're countering your own point: if it's relatively easy today, then that's proof that it's not the difficulty of attacking that's saving us.
> nation states will have orders of magnitude more intelligence power than others
I'm not talking about nation states. That's what you wanted to allude to, right? If they wanted to attack the power grid today, they could do it. The doomsaying in this comment section is about how "we're giving people robotic weapons in their garages"
> It is easy to imagine an end where everything is already balanced
What I am describing is not at all relying on balance. It's relying on the inherent asymmetry that attackers need to solve multiple problems that don't move the needle on the goal just to get the LLM to attack the system, while defenders can get the LLM closer to the system, and with a better understanding of it, in order to gain mitigations.
In fact if anything balance would help the attackers: In a balanced end-game anyone can get their hands on an unaligned model or build one with the capabilities we're likely to have on an individual level. But we're hurtling towards the opposite, where unaligned models lag generations behind aligned because of commercial interests.
> If it's relatively easy today, then that's proof that it's not the difficulty of attacking that's saving us.
It could be perceived that way, but the argument is also that if it becomes easy enough, some actors will participate. It is the argument used for jailing current LLM capabilities.
> "we're giving people robotic weapons in their garages"
On a long enough timeline this would inevitably be true, if AI plays out as proponents envision. The question becomes whether AI becomes an effective counter to all such power advancements. There will likely still be disparity among the AIs individuals have for personal use, unless society becomes more centrally managed by a global AI.
> inherent asymmetry that attackers need to solve multiple problems
Isn't there also asymmetry in that the attackers only need to find a single exploit, but the defenders need to have found all exploits beforehand?
> But we're hurtling towards the opposite, where unaligned models lag generations behind aligned because of commercial interests.
We don't have any "aligned" models. It is an unsolved problem, and models have turned out to be relatively easy to replicate at significantly lower cost than the major commercial investments.
That's completely outdated thinking if you reach the level of AI being described.
Complex systems could be completely self-healing and self-quarantining; the "red team" can be freely interrogated by the "blue team" and convinced to create attacks that are then mitigated.
And again, the AI itself would improve at self-interrogation, so we're saying "trivialize", but trivial as in tricking a system capable of hacking into power grids with ease into ignoring its training.
People who go to this doomsday scenario fail to extend any sort of lateral thinking.
An LLM that trivializes taking down the power grid would not be "GPT 4 + SCADA infiltration", it'd be a new paradigm in how humanity operates.
Blue team has to imagine every possible exploit before it occurs, for a system that is essentially a black box, has unknown emergent behaviors, and whose input is anything that can be described in human language.
How many autogpt scripts can trivialize taking down the power grid?
At that point why are you imagining we have something like the current power grid? We'd have a tool that drives massive advancements that trivialize decentralized power generation, reduce scarcity, find cures for mental illness, and improve equality.
People need to really expand their understanding of how this will disrupt human existence if it can actually reach that point.
It's like imagining what would happen if you gave the Romans nuclear warheads, instead of what would happen if you gave them all of modern technology.
> How many autogpt scripts can trivialize taking down the power grid?
Why does it need to be autogpt? It can instead create sophisticated plans for humans to execute. Early nefarious use is more likely to go this route.
> We'd have a tool that drives massive advancements that trivialize decentralized power generation, reduce scarcity, find cures for mental illness, and improve equality.
Intellectual knowledge and cyber capabilities will vastly outstrip physical manufacturing. These threats will likely appear long before this transformation occurs.
>Why does it need to be autogpt? It can instead create sophisticated plans for humans to execute. Early nefarious use is more likely to go this route.
That'd be infinitely harder for an LLM: Problems where the context window can grow based on feedback of its actions (like automated attacks) are the ones that will be solved much much earlier.
The teams defending against attacks will be able to trivially apply LLMs in an automated fashion to finding mitigations; if you're already admitting it's going to be LLMs writing plans that then need humans to implement them, you're countering your own point...
> Intellectual knowledge and cyber capabilities will vastly outstrip physical manufacturing. These threats will likely appear long before this transformation occurs.
Right, so you're imagining the intellectual knowledge and cyber capabilities to trivially commit terrorism, but scarcity and equality are untouched because of manufacturing.
> The teams defending against attacks will be able to trivially apply LLMs
Who are these imaginary teams defending against attacks? How does the LLM defend against another human who plans to target something that is not an LLM?
> if you're already admitting it's going to be LLMs writing plans that then need humans to implement them, you're countering your own point...
I'm just not limiting it to only a single context. This is the very basis for jailing the current LLMs.
> Right, so you're imagining the intellectual knowledge and cyber capabilities to trivially commit terrorism, but scarcity and equality are untouched because of manufacturing.
Hardware revolutions always lag intellectual and cyber revolutions. Do you see that changing?
> For the first time in all of history, we can create functions in programming by training them, rather than writing them by hand. In other, slightly more technical words, we can infer a function from observations and then use it. Like babies and toddlers do. This is historic, it’s transformative and it is a fundamental breakthrough.
Nothing could be further from the truth. Program Synthesis [1] has been an active field of research with many real-world applications since the dawn of computer science.
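To see how old the quoted idea is: "inferring a function from observations and then using it" is, in its simplest form, just least squares. A minimal sketch (Python with numpy, toy data of my own invention):

    import numpy as np

    # Observations of some unknown function's behaviour
    # (toy data; the hidden rule is y = 2x + 1).
    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([1.0, 3.0, 5.0, 7.0])

    # Infer a linear function from the observations...
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    slope, intercept = np.linalg.lstsq(A, ys, rcond=None)[0]

    # ...and then use it.
    f = lambda x: slope * x + intercept
    print(f(10.0))  # ~21.0

The point isn't that LLMs are linear regressions; it's that learning functions from observations, at every scale, long predates this moment.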
Even program synthesis from natural language specifications is not new [2].
Neural program synthesis is certainly not new [3].
Program synthesis from examples of program behaviour, without examples of programs (i.e. what Large Language Models can't do), is also nothing new [4].
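For the unfamiliar, a toy version of synthesis from examples of behaviour alone can be brute-force enumeration over a small DSL. This three-primitive DSL is my own invention for illustration, not taken from [4]:

    from itertools import product

    # A tiny, purely illustrative DSL of integer primitives.
    PRIMITIVES = {
        "inc": lambda x: x + 1,
        "double": lambda x: x * 2,
        "square": lambda x: x * x,
    }

    def synthesize(examples, max_depth=3):
        # Enumerate compositions of primitives until one reproduces
        # every observed input/output pair.
        for depth in range(1, max_depth + 1):
            for names in product(PRIMITIVES, repeat=depth):
                def program(x, names=names):
                    for name in names:
                        x = PRIMITIVES[name](x)
                    return x
                if all(program(i) == o for i, o in examples):
                    return list(names)
        return None

    # Only observations of behaviour are given, never a program:
    print(synthesize([(1, 4), (2, 6), (3, 8)]))  # -> ['inc', 'double']

Real synthesizers search vastly larger program spaces with far cleverer pruning, but the principle of recovering programs from behaviour is decades old.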
None of that is new. None of that is even particularly well done by Large Language Models (what the author means when they say "AI" in the article).
The author is misinformed. The rest of the article is also rife with misunderstandings and faulty assumptions, but Program Synthesis is one of my areas of knowledge, so that bit stuck out like a sore thumb.
Please try to be better informed before writing lengthy articles to share your thoughts on the internet.