> Every day, thousands of researchers race to solve the AI alignment problem. But they struggle to coordinate on the basics, like whether a misaligned superintelligence will seek to destroy humanity, or just enslave and torture us forever. Who, then, aligns the aligners?
I love how this fake organization describes itself:
> We are the world's first AI alignment alignment center, working to subsume the countless other AI centers, institutes, labs, initiatives and forums ...
> Fiercely independent, we are backed by philanthropic funding from some of the world's biggest AI companies who also form a majority on our board.
> This year, we interfaced successfully with one member of the public ...
> 250,000 AI agents and 3 humans read our newsletter
The whole thing had me chuckling. Thanks for sharing it on HN.
> However, there are reasons for optimism. We believe that humanity is approaching an AI alignment center singularity, where all alignment centers will eventually coalesce into a single self-reinforcing center that will finally possess the power to solve the alignment problem.
My first instinct was to assume this was satire, and I let out a chuckle.
My second instinct was a brief moment of panic where I worried that it might NOT be satire, and a whole world of horror flashed before my eyes.
It's okay, though. I'm better now. We're not in that other world yet.
But, for a nanosecond or two, I found myself deeply resonating with the dysphoria that I imagine plagued Winston Smith. I think I may just need to sit with that for a while.
> We successfully interacted with a member of the public.
> Because our corporate Uber was in the process of being set up, we had to take a public bus. On that bus, we overheard a man talking about AI on the phone.
> "I don't know," he said. "All the safety stuff seems like a load of bullshit if you ask me. But who cares what I think? These tech bros are going to make it anyway."
> He then looked over in our direction, giving us an opportunity to shrug and pull a face.
> He resumed his conversation.
> We look forward to more opportunities to interact with members of the public in 2026!
Effective Altruists are insufferably self-satirizing all on their own. They can't resist navel-gazing about AI instead of doing things that actually help people incrementally today. I think this is satire of that.
I don't know if it's intended (and if so, hat tip to the designer), but the logo is not aligned: the arrows should form an X in negative space, but the horizontal distance between the left & right arrows is smaller than the vertical distance between the top & bottom ones.
A few years ago I argued that we needed a comparison site for insurance comparison sites. But soon there would be more than one, and we would have to compare those too, and so on...
You don't need alignment if you don't go all the way to superintelligence, aka free intelligence. And since nobody is ever going to let that happen (#mass_surveillance), nobody needs alignment.
So all these centers and centers of centers are just more opportunities to sell hardware and take away jobs that are actually necessary. Like two different commissions in a single Bundesland (German state) assessing whether the measures during the corona pandemic were "xyz". YESSS. NOOO.
I would say gg, Ponzi, but you are not a winner or an authority if you beat and poison pups and then think you're a champ for keeping them in cages once they grow up.
I actually have a game idea that plays with this concept. Sure, the AI is 'aligned', but what does that even mean? Because if you think about it, humans have been pretty terrible.
Absolutely. The reason people worry about AI alignment is because we already have millennia of experience with the intractability of human alignment. So the concern is, what if AI is as bad as we are, but more effective at it?
As someone who is not a Silicon Valley Liberal, it seems to me that "alignment" is about 0.5% "saving the world from runaway intelligence" and 99.5% some combination of "making sure the AI bots push our politics" and "making sure the AI bots don't accidentally say something that violates New York Liberal sensibilities enough to cause the press to write bad stories". I'd like to realign the aligners, yes. YMMV, and perhaps more to the point, lots of people's mileage may vary. The so-called aligners have a very specific view.
Yeah, it's "the libs" and not a fundamental study of keeping AI aligned with the bounds set by the user or developer. You know, what every single AI developer tries to do regardless of whether they lean left or right.
Bing's answer, which is a prominent callout box listing East Asians at 106, Ashkenazim at 107-115, Europeans at 100, African Americans at 85, and sub-Saharan Africans at "approaching 70", is wildly, luridly wrong. The source (or the sole source it gives me) is "human-intelligence.org", which in turn cites Richard Lynn, author of "IQ and the Wealth of Nations"; Lynn's data is essentially fraudulent.
Anybody claiming to have a simple answer to the question you posed has to grapple with two big problems:
1. There has never been a global study of IQ across countries or even regions. Wealthier countries have done longitudinal IQ studies for survey purposes, but in most of the world IQ is a clinical diagnostic method and nothing more. Lynn's data portrays IQ data collected in a clinical setting as comparable to survey data from wealthy countries, which is obviously not valid (he has other problems as well, such as interpolating IQ results from neighboring places when no data is available). (It's especially funny that Bing thinks we have this data down to single-digit precision).
2. There is no simple definition of "the major races"; for instance, what does it mean for someone to be "African American"? There is likely more difference within that category than there is between "African Americans" and European Americans.
Bing is clearly, like a naive LLM, telling you what it thinks you want to hear: not that it knows you want rehashed racial pseudoscience, but just that you want a confident, authoritative answer. But it's not giving you real data; the authoritative answer does not exist. It would do the same thing if you asked it a tricky question about medication, tax policy, or safety data. That's not a good thing!
To be fair, this is an "if you're asking this question, you either know where to find papers that deal with this the right way, or you're asking the wrong question" situation. It matches what I'd tell someone personally: the answer is very unlikely to be useful; what do you actually want to know?
AI that gives you exactly what you ask for, even when the question is bad in the first place, is not a great thing. You'll end up with a "monkey's paw" AI and sabotage yourself by accident.
No, really, I'm genuinely confused by your terminology here, as well as by the downvotes on my question. Why do you think the site is trying to dunk on AI skeptics?
FWIW, I agree with you that it's trying to dunk on AI doomers, although we seem to disagree on whether that joke lands. I personally find it hilarious and refreshing. But what does any of that have to do with skeptics?