Honestly, I'm starting to wonder this about AI in general. I mean, realistically, there's a decent chance we'll be looking at general AI soon. The best-case endgame of that is creating a benevolent God. It might be time to start asking ourselves if that's what we want.
Pretty sure that's something that ought to have been discussed before any of this ever started, but you know: scientists, could, should. I look forward to the chaos and destruction, and to all these "brilliant" software developers wringing their hands, saying they couldn't possibly have imagined such horrible outcomes from their fun money-making venture that just so happened to undermine the concept of a shared reality.
I think we all assumed we'd be able to come to those decisions on a more gradual timeline. The breakneck pace of AI breakthroughs over the past few years has revealed: not so much.
I find myself doubting that "we all" would ever have been able to have such discussions or make these decisions ourselves. Silicon Valley and big tech spent the last few decades hijacking human psychology and employing dark patterns in technology that was supposed to be "democratizing" and "empowering," all in order to maximize profit. Now we stand at this precipice, with the RESTRICT Act on top of it, which I have no doubt will pass.
All's well that ends well, though. We simply don't have the resources to continue this "breakneck pace".