All kinds of things. Personally, in the medium term I'm concerned about massive job losses and the collapse of the current consensus on social order. In the longer term, the implications of human brains becoming worthless compared to superior machine brains.
Those things won't happen, or at least, nothing like that will happen overnight. No amount of touting baseless FUD will change that.
I guess I'm a Yann LeCun-ist and not a Geoffrey Hinton-ist.
If you look at the list of signatories here, it's almost all atheist materialists (such as Daniel Dennett) who believe (baselessly) that we are soulless biomachines: https://www.safe.ai/statement-on-ai-risk#open-letter
When they eventually get proven wrong, I anticipate the goalposts will move again.
Luckily I haven't read any of that debate, so any ad hominems don't apply to me. I've come up with these worries all on my own, after the realization that GPT-4 does a better job than me at a lot of my tasks, including setting my priorities and schedules. At some point I fully expect the roles of master and slave to flip.
Good thing unemployment is entirely determined by what the Federal Reserve wants unemployment to be, and even better that productivity growth increases wages rather than decreasing them.