is essentially cooperative multitasking, whose efficiency is questionable in general and especially on multicore hardware - though I can't speak for games specifically. On top of that, programming against think(delta_t) adds complexity and forces more explicit state into each entity, compared to real threads with their own stacks.
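To make the contrast concrete, here's a minimal sketch in C++ (an invented example, not from any particular engine): with a per-tick think(delta_t), any behaviour that spans multiple frames has to be encoded as explicit member fields and a phase enum on the entity, whereas a thread - green or otherwise - could keep all of that in local variables on its stack.

    // Hypothetical entity: a guard that waits for a while, walks somewhere,
    // then waits again. With a per-tick callback, the "where am I in this
    // behaviour" bookkeeping has to live in member fields rather than on a stack.
    struct GuardEntity {
        enum class Phase { Waiting, Walking };
        Phase phase = Phase::Waiting;
        float wait_left = 2.0f;   // seconds left in the Waiting phase
        float walk_left = 5.0f;   // seconds left in the Walking phase

        void think(float delta_t) {
            switch (phase) {
            case Phase::Waiting:
                wait_left -= delta_t;
                if (wait_left <= 0.0f) { phase = Phase::Walking; walk_left = 5.0f; }
                break;
            case Phase::Walking:
                walk_left -= delta_t;   // ...and move a little each tick...
                if (walk_left <= 0.0f) { phase = Phase::Waiting; wait_left = 2.0f; }
                break;
            }
        }
    };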
John Carmack has some ideas on how to use immutable copies of the game state and functional programming concepts to create something that's easily parallelized. I saw it in an old QuakeCon keynote: https://youtu.be/Uooh0Y9fC_M
In low-level audio you have something similar: there's a thread that pulls audio data all the time and it can't stop; at the same time you potentially have lots of things happening in other threads, including the UI one. You can always find ways of minimizing the synchronization overhead, e.g. by having a single fast entry point for any changes that happened since the last cycle. With audio you basically end up passing immutable blocks of data with minimal synchronization. So it should be possible in games, too.
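One common shape for this, as a sketch (the names and the fixed capacity are mine, not from any particular audio API): a single-producer/single-consumer ring buffer, where a worker/UI thread pushes immutable blocks and the audio callback pops them without ever taking a lock.

    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <optional>

    // Minimal SPSC ring buffer: exactly one thread calls push(), exactly one
    // (the audio thread) calls pop(). Capacity is N-1 usable slots.
    template <typename T, std::size_t N>
    class SpscRing {
        std::array<T, N> slots_{};
        std::atomic<std::size_t> head_{0};  // only written by the consumer
        std::atomic<std::size_t> tail_{0};  // only written by the producer

    public:
        bool push(T value) {                // producer thread only
            const auto tail = tail_.load(std::memory_order_relaxed);
            const auto next = (tail + 1) % N;
            if (next == head_.load(std::memory_order_acquire))
                return false;               // full: caller drops or retries
            slots_[tail] = std::move(value);
            tail_.store(next, std::memory_order_release);
            return true;
        }

        std::optional<T> pop() {            // audio thread only, never blocks
            const auto head = head_.load(std::memory_order_relaxed);
            if (head == tail_.load(std::memory_order_acquire))
                return std::nullopt;        // empty
            T value = std::move(slots_[head]);
            head_.store((head + 1) % N, std::memory_order_release);
            return value;
        }
    };

    // e.g. SpscRing<std::vector<float>, 8> blockQueue;  // blocks of samples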
The evolution has been from single-threaded engines, to different core elements running on separate threads (e.g. renderer and IO), to task-based scheduling.
Typically, though, updating the game state itself is a relatively small part of the frame time, and it can be reasonably tricky to get real gains from parallelising it.
Audio is a very different problem that is typically about a chain of streaming buffers. The thread pulling audio data (and all the others processing it) is independent from the UI etc., so that is not an issue.
Copying and passing immutable data around is not always fast enough, and merging the results back is definitely not trivial.
In summary, it is a hard problem with no general solution. If you could solve it, you would have solved multithreading design for any problem.
Having a system thread for every entity in a game doesn’t scale. Also, having to synchronise the entities’ threads by hand would be quite a nightmare.
It scales a lot better to have a thread pool with a thread per core, and to run lightweight jobs on them. So pretty much the same idea as e.g. goroutines in Go or the green threads in Erlang.
With such a job system you also have a lot less hassle with thread synchronisation. I’m working on an interactive 3D application and the job system we have works like a charm for parallelization.
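For the curious, a bare-bones version of such a job system could look roughly like this (a sketch, not our actual code; a real engine would add work stealing, job dependencies, per-frame allocators and so on):

    #include <algorithm>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // One worker per hardware core, all pulling small jobs from a shared queue.
    class JobSystem {
    public:
        JobSystem() {
            const unsigned n = std::max(1u, std::thread::hardware_concurrency());
            for (unsigned i = 0; i < n; ++i)
                workers_.emplace_back([this] { worker_loop(); });
        }

        ~JobSystem() {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                stopping_ = true;
            }
            cv_.notify_all();
            for (auto& w : workers_) w.join();
        }

        void submit(std::function<void()> job) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                jobs_.push(std::move(job));
            }
            cv_.notify_one();
        }

    private:
        void worker_loop() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                    if (stopping_ && jobs_.empty()) return;
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                job();  // run the job outside the lock
            }
        }

        std::vector<std::thread> workers_;
        std::queue<std::function<void()>> jobs_;
        std::mutex mutex_;
        std::condition_variable cv_;
        bool stopping_ = false;
    };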
Another benefit of threads - green or not - is that you can keep state on the stack and write in a more functional style, which is generally more robust and easier to write than explicit state machines.
Parallelism has many forms, and in my experience you can always find ways of minimizing synchronization for particular tasks. For example, you can sometimes replace a whole queue with a single atomic pointer when you know messages semantically override previous ones, i.e. the consumer is only ever interested in the most recent one.
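Something like this sketch (CameraState is just a made-up payload): the producer publishes immutable snapshots into a single atomic slot, overwriting any snapshot nobody has read yet, and the consumer grabs the latest whenever it gets around to it.

    #include <atomic>
    #include <memory>

    struct CameraState { float x, y, z, yaw, pitch; };

    // A "latest value wins" mailbox: one atomic pointer instead of a queue.
    class LatestMailbox {
    public:
        // Producer: publish a new snapshot, discarding any unread one.
        void publish(CameraState state) {
            delete pending_.exchange(new CameraState(state),
                                     std::memory_order_acq_rel);
        }

        // Consumer: take the newest snapshot if one arrived since the last call.
        std::unique_ptr<CameraState> take() {
            return std::unique_ptr<CameraState>(
                pending_.exchange(nullptr, std::memory_order_acq_rel));
        }

        ~LatestMailbox() { delete pending_.load(); }

    private:
        std::atomic<CameraState*> pending_{nullptr};
    };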
Here's an interesting presentation on Overwatch's ECS architecture. Though the speaker doesn't go into great detail on multithreading, he points out how the design makes it easy to identify which parts of the system can safely run in parallel.
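The core idea in a toy sketch (the component and system names are invented here, not taken from the talk): when components live in flat arrays and each system declares what it reads and writes, two systems with disjoint component access share no state and can be dispatched to different threads without any locking.

    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Position { float x, y; };
    struct Velocity { float dx, dy; };
    struct Health   { int hp; };

    // Reads Velocity, writes Position (arrays are parallel, one entry per entity).
    void movement_system(std::vector<Position>& pos,
                         const std::vector<Velocity>& vel, float dt) {
        for (std::size_t i = 0; i < pos.size(); ++i) {
            pos[i].x += vel[i].dx * dt;
            pos[i].y += vel[i].dy * dt;
        }
    }

    // Writes Health only.
    void regen_system(std::vector<Health>& health) {
        for (auto& h : health)
            if (h.hp < 100) ++h.hp;
    }

    void update_frame(std::vector<Position>& pos, std::vector<Velocity>& vel,
                      std::vector<Health>& health, float dt) {
        // Disjoint component access, so the two systems can safely overlap.
        std::thread t1(movement_system, std::ref(pos), std::cref(vel), dt);
        std::thread t2(regen_system, std::ref(health));
        t1.join();
        t2.join();
    }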