Except the concern isn't genuine here. Some of the signatories have AI companies and interests. They want to slow the leaders down so that they can catch up. It's disingenuous.
Search "Graham's hierarchy of disagreement", which is popular on HN. Your current criticism is at the "ad hominem" level.
It is entirely beside the point what Elon Musk's motivation is. The questions are: 1) Are the risks referenced in the letter real? They certainly are. There has been informed thought on this for more than a decade, and recent LLMs have made the dangers even clearer. 2) Is the letter's proposal of a 6-month pause useful? It's the most useful and plausible step I can think of. We need to take stock. It's not up to a handful of researchers to endanger the lives of 6B other people. 3) Is it actually plausible that we could get such a pause? We'll see. I doubt it, but also search "death with dignity" by Yudkowsky.
Back to Musk as an "authority": the headline is about him, but more than 1000 other concerned parties have signed. I will sign. Their signature vetting process is stalled due to the volume of signatures, otherwise there would be many more.
> Your current criticism is at the "ad hominem" level.
Allow me to rephrase. I am deeply concerned that some of the powerful parties backing this may be using an enforced or agreed-to "ceasefire" as a means to catch up in capability. I also worry that some may be able to use political strong-arming to accomplish this as a means of unfair competition.
> It is entirely beside the point what Elon Musk's motivation is.
Is it always beside the point what anyone's motivation is? Motivation matters.
> if the risks referenced in the letter are real, which they certainly are.
Your opinion.
> There has been informed thought on this for more than a decade. Recent LLMs have made the dangers even more clear.
> 2) is the letter's proposal of a 6 month pause useful.
More opinions.
> It's the most useful and plausible step I can think of. We need to take stock. It's not up to a handful of researchers to endanger the lives of 6B other people. 3) Is it actually plausible that we could get such a pause. We'll see. I doubt it but also search "death with dignity" by Yudkowsky.
All of this is your opinion.
> Back to Musk as an "authority", the headline is about him but more than 1000 other concerned parties have signed.
I didn't even mention Musk. I have several other names in mind. Lots of folks with AI companies (including LLMs!) that "may" be experiencing FOMO and sensing a strategy here. Maybe. Hypothetically. In a non-"ad hominem" way.
> I will sign. Their signature vetting process is stalled due to the volume of signatures, otherwise there would be many more.
People are starting counter petitions, which I'll gladly sign. The one by Suhail posted on Twitter was hilarious af.
I'll also go to whatever country doesn't "pause", because this tech is literally the most exciting development of my lifetime. And I want to spend my life doing something that matters instead of gluing distributed systems together to process financial transactions.
It doesn't matter. I recognised the "Nice guys get all the AI" fallacy years ago. If some organisations agree to stop, others won't, and some of those wouldn't mind seeing the world burn.
It's almost a certainty that countries with the means to do so will continue this research, if not in public then in secret. They'll see it as a royal road to nearly infinite riches and power. At the same time, letting another country take the lead will be seen as an unacceptable risk of ruin.
I really don't see AI research halting. Slowing a little, maybe, but I'm not sure if slowing it down a little and driving it underground will help.
One of the other concerns (apart from safety) is the job displacement aspect - if AI displaces too many workers too fast, that could spark a worldwide conflict (some speculate that similar automation pushes laid the groundwork for WWI and WWII).
This problem has a much better solution than blocking technical progress: UBI etc.
But, yeah, I can totally believe that our elites would prefer a solution that doesn't require a major reconfiguration of the economic system from which they benefit so much.
Job displacement is a silly problem compared to the AI alignment problem. One causes some human misery (alongside a lot of societal benefit to other groups of humans at the same time). The other one f'cks all of us, the entire species and its future.
Not if it is a laid-off biotech worker who goes mad and builds the GPT6-designed virus because his wife or kids died due to his loss of employment. We are safer, all of us, the entire species, when all the people feel the group is watching out for them and we are all in it together.
One reason to discuss job displacement is that otherwise you sound like a nutcase ("we're all going to die!")... which causes most people to dismiss you out of hand. If you talk about job loss, more people see something plausible, concrete, short-term, affecting them directly, etc. You can get engagement from more people. Then you can introduce the real problem of, like, human extinction.