Essentially, yes. Concurrency concerns in games are quite different from most applications (because of the high value of latency and the low value of fidelity[0]): typically there are around a dozen large computations to do 60 times a second (AI, animation, physics, collision, audio, effects, rendering, UI, game mechanics, pathfinding, networking), and the environment for games has been manycore (in the sense of both the number and the inhomogeneity of processors) since the PS2.
Typically the approach is to push an "object graph" (actually it's usually a bunch of arrays) down a pipeline, for example animation → physics → rendering (for player walking), or collision → game mechanics → UI (for someone being shot). For cache locality reasons, each stage is done as a single pass over its arrays, so where the data pipelines are too long to process in a single frame it's very common to defer processing to a later frame--for example a footfall event (caused by an animation) might trigger a sound effect on the next frame, because audio processing takes a relatively large slice of the frame budget (which is OK because audio has its own processor, and because we know that the next frame will come around in less than 16.7ms).
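Here's a minimal C++ sketch of that shape--every name in it (World, update_animation, deferred_sounds, and so on) is made up for illustration, not any real engine's API: two pipeline stages, each a single pass over contiguous arrays, with events queued for the next frame rather than handled inline.

    #include <cstdio>
    #include <vector>

    // Structure-of-arrays layout: each stage walks a contiguous array in
    // one pass, which is what makes the pipeline cache-friendly.
    struct World {
        std::vector<float> bone_phase;  // animation state, one entry per entity
        std::vector<float> pos_y;       // physics state, one entry per entity
    };

    // Events produced this frame are consumed *next* frame, e.g. a
    // footfall triggering a sound ~16.7ms later.
    std::vector<int> deferred_sounds;

    void update_animation(World& w) {
        for (float& phase : w.bone_phase) {
            phase += 0.6f;
            if (phase > 1.0f) {                  // footfall event
                phase -= 1.0f;
                deferred_sounds.push_back(42);   // queue it; don't play now
            }
        }
    }

    void update_physics(World& w) {
        for (float& y : w.pos_y) y -= 0.01f;     // trivial stand-in for physics
    }

    void frame(World& w) {
        // Consume last frame's deferred events first...
        for (int id : deferred_sounds)
            std::printf("play sound %d (deferred from last frame)\n", id);
        deferred_sounds.clear();
        // ...then run this frame's stages in pipeline order.
        update_animation(w);
        update_physics(w);
        /* rendering stage would follow */
    }

    int main() {
        World w{{0.5f, 0.9f}, {10.0f, 12.0f}};
        for (int i = 0; i < 3; ++i) frame(w);    // run a few 60Hz frames
    }

The key property is that no stage ever waits on another mid-frame: anything that can't be resolved this frame goes in a queue and gets picked up next frame.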
I think a row-locked model is unlikely to be pursued because of the amount of indirection involved--the Xenon (in the 360) and the PPE (in the PS3) are both in-order processors, so synchronous reads and writes to memory (obviously necessary for shared data) are absolutely destructive to performance: an in-order core simply stalls on every cache miss rather than reordering work around it.
[0] When I say "the high value of latency and the low value of fidelity", I mean that making a tradeoff involving, say, a 5% performance hit for safety or scalability is a no-brainer in most environments, e.g. a web app. In a game it can well mean the difference between being able to release the game, and having Sony and MSFT refuse to license it because the framerate is too unstable.
Similarly, a strategy that involves throwing away user data if it becomes too large would be, let's say, eyebrow-raising in most industries, but it's very common to have fixed-size particle buffers and to simply retire old particles even if they haven't died naturally. This sort of "cheating" is pervasive and necessary in games programming, and honestly, it's part of the fun. :)
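What that looks like in practice is roughly the following (a sketch with illustrative names, not any particular engine's code): a fixed-capacity ring buffer whose spawn never allocates and never fails--when full, it just clobbers the oldest slot.

    #include <array>
    #include <cstddef>
    #include <cstdio>

    struct Particle { float x, y, life; };

    // Fixed-capacity ring buffer: spawning never allocates and never
    // fails. When full, the oldest slot is overwritten ("retired"),
    // even if that particle hadn't died naturally yet.
    template <std::size_t N>
    class ParticleBuffer {
        std::array<Particle, N> slots_{};
        std::size_t next_ = 0;    // ring cursor; points at the oldest slot
        std::size_t count_ = 0;
    public:
        void spawn(const Particle& p) {
            slots_[next_] = p;            // clobber the oldest if full
            next_ = (next_ + 1) % N;
            if (count_ < N) ++count_;
        }
        std::size_t size() const { return count_; }
    };

    int main() {
        ParticleBuffer<256> sparks;
        for (int i = 0; i < 1000; ++i)    // 1000 spawns, bounded memory
            sparks.spawn({0.0f, 0.0f, 1.0f});
        std::printf("live particles capped at %zu\n", sparks.size());
    }

The design win is predictability: memory use and per-frame cost are constant no matter how many explosions go off at once.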
Mapping an update function over a list of entities is a form of concurrency (note: not parallelism)... If you ever write a game engine from scratch, there are several tricky issues you must deal with because of this--for example, entities dying or new entities being spawned while you're traversing the list.
Consider what happens if your game character's arrow kills a monster while you are iterating over the list of entities. Do you remove it from the list right then? If other objects depend on, e.g., the number of monsters present, then you have a race condition: objects updated before the monster was removed and objects updated after see inconsistent views of the current state of the world... this can cause all kinds of fun, intermittent bugs!
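One common fix (sketched below in C++; one approach among several, with made-up names) is to never mutate the list mid-iteration: mark deaths during the pass and apply them afterwards, so every entity updated this frame sees the same world.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Entity {
        bool dead = false;
        // update() may mark this entity (or, via some game-specific
        // channel, another one) as dead, but never mutates the list.
        void update() {}
    };

    void update_all(std::vector<Entity>& entities) {
        for (Entity& e : entities)   // stable iteration: every entity sees
            e.update();              // the same pre-update world this frame
        // Apply deaths only after the full pass (births would likewise
        // be buffered in a pending list and spliced in here).
        entities.erase(std::remove_if(entities.begin(), entities.end(),
                                      [](const Entity& e) { return e.dead; }),
                       entities.end());
    }

    int main() {
        std::vector<Entity> entities(3);
        entities[1].dead = true;      // pretend the arrow hit monster #1
        update_all(entities);
        std::printf("%zu entities remain\n", entities.size());
    }
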
Are there any examples of using something like relational tables for game state?