Decisions like these need to be made slowly, societally, and over time.
The tension between small-c conservatism that resists change and innovators who push for change before its results can be known is very important!
No one person or group needs to or will decide. Definitely not states. "Activists" both in favor of and opposed to changes will be part of it. For the last few decades in tech, the conservative impulse has been mostly missing (at least in terms of the application of technology to our society, lol) and look where we are: a techno-dystopian, greed-powered corporate surveillance state.
We're not going to vote on it. Arguments like the one happening in this comments section _are_ the process, for better or worse.
We also don't have to make the same decision for all use of AI.
For example, we should be much more cautious about using AI to decide "who should get pulled over for a traffic stop" or "how long a sentence should someone get after a conviction". Many government uses of AI are deeply concerning and absolutely should move more slowly. And government uses of AI should absolutely be a society-level decision.
For uses of AI that select between people (e.g. hiring mechanisms), even outside of government applications, we already have regulations in that area regarding discrimination. We don't need anything new there; we just need to make it explicitly clear that using an opaque AI does not absolve you from non-discrimination regulations.
To pick a random example: if you used AI to determine "which service phone calls should we answer quicker", and the net effect of that AI is systematically longer or shorter hold times that correlate with a protected class, that's absolutely a problem that should be handled by existing non-discrimination regulations, just as if you had an in-person queue and systematically waved members of one group to the front.
We don't need to be nearly as cautious about AIs doing more innocuous things, where consequences and stakes are much lower, and where a protected class isn't involved. And in particular, non-government uses of AI shouldn't necessarily be society-level decisions. If you don't like how one product or service uses AI, you can use a different one. You don't have that choice when it comes to hiring mechanisms, or interactions with government services or officials.
Reading the article, it sounds like many of the proposals under consideration are consistent with that: they're looking closely at potentially problematic uses of AI, not restricting usage of AI in general.