That's probably not a bad way to think about it - in an application that large, dealing with lots of data, needing high performance, it is effectively an application-specific database.
On the other hand, think about the improved cache footprint from using 2-byte object handles rather than full-size 8-byte pointers. If the "user code" keeps a lot of those handles in sequential structs, that is a huge win.
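To make the cache-footprint arithmetic concrete, here's a quick back-of-the-envelope sketch (the 64-byte cache line and the 2-byte/8-byte sizes are the assumptions from the comment above, not anything universal):

```python
import struct

CACHE_LINE = 64  # bytes; typical L1 cache line on x86

ptr_size = struct.calcsize("Q")     # 8-byte full pointer
handle_size = struct.calcsize("H")  # 2-byte object handle

# How many sequential references fit in a single cache line?
print(CACHE_LINE // ptr_size)     # 8 pointers per line
print(CACHE_LINE // handle_size)  # 32 handles per line
```

So iterating over a dense array of handles touches 4x fewer cache lines than the same array of raw pointers, before any other benefit kicks in.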
Oh, sorry, I didn't mean to be disparaging. What I meant was that if the application needs to handle lots of in-memory data, then thinking of it as an in-memory database seems like a great idea. The ECS architecture seems to be pulling in ideas from column-oriented databases, and there are probably lots of ideas like that which could be repurposed. (Of course there is the other side of that: if you can avoid storing lots of data in memory at all, possibly by using an off-the-shelf relational database, then that's a great approach too.)
I mean, calling it a database might be technically accurate, but it's misleading in the sense that it gives the impression that it's a big heavyweight thing. It's not like there's a SQL implementation in there.
You jest, but SQLite run in-memory is a pretty fast database. I'm currently using it for the ECS implementation in a little game I'm writing on the side (albeit in Common Lisp, and not yet very performance-heavy).
The reason I went this way is that when writing my third ECS with various access-pattern optimizations, I realized I was hand-implementing database indices - so I may as well plug in SQLite for now, nail down the interface, and refactor it into an array-of-structs implementation with handles later.
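For anyone curious what "SQLite as an ECS" might look like, here's a minimal sketch of the idea (my own illustration, not the commenter's actual code; the table and column names are made up): one table per component type, keyed by entity id, with SQLite's indices standing in for hand-rolled access-pattern optimizations.

```python
import sqlite3

# One table per component type, entity id as the primary key.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE position (entity INTEGER PRIMARY KEY, x REAL, y REAL);
    CREATE TABLE velocity (entity INTEGER PRIMARY KEY, dx REAL, dy REAL);
""")
db.execute("INSERT INTO position VALUES (1, 0.0, 0.0)")
db.execute("INSERT INTO velocity VALUES (1, 1.5, -0.5)")

# A "movement system" becomes a query over the component tables:
# only entities that have both components are updated.
db.execute("""
    UPDATE position
    SET x = x + (SELECT dx FROM velocity WHERE entity = position.entity),
        y = y + (SELECT dy FROM velocity WHERE entity = position.entity)
    WHERE entity IN (SELECT entity FROM velocity)
""")
print(db.execute("SELECT x, y FROM position WHERE entity = 1").fetchone())
# (1.5, -0.5)
```

The nice property is that "which entities have components A and B" is just a join, so the query planner does the work that a bespoke ECS would do with archetype tables or sparse sets.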
The translation of that key to a memory location should be very straightforward: a few bitwise operations to extract the necessary pieces from the handle, a multiplication to scale the index, an addition of the private base pointer, and there you go. Nothing fancy or CPU-expensive.
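Spelled out, the translation is just a mask, a shift, a multiply, and an add. The sketch below assumes a hypothetical 16-bit handle layout (high 4 bits a generation counter, low 12 bits an array index) and made-up element size and base address; any real scheme would pick its own split.

```python
# Hypothetical handle layout: [4-bit generation | 12-bit index].
GEN_SHIFT = 12
INDEX_MASK = (1 << GEN_SHIFT) - 1

ELEM_SIZE = 32       # bytes per component slot (assumed)
BASE_ADDR = 0x10000  # the pool's private base pointer (assumed)

def handle_to_addr(handle: int) -> int:
    index = handle & INDEX_MASK           # bitwise extract the index
    return BASE_ADDR + index * ELEM_SIZE  # scale, then offset from base

h = (3 << GEN_SHIFT) | 5  # generation 3, index 5
print(hex(handle_to_addr(h)))  # 0x100a0
```

The generation bits aren't needed for the address itself; a real implementation would compare them against a per-slot counter to catch stale handles before dereferencing.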