How about message-passing without serialization? Imagine that what we got in the 1970s, instead of Unix, was something like an Erlang VM: each OS "process" would have an inbox ring-buffer that other processes could write to, and every process would share a set of basic agreed-upon tagged types, such that processes could stick raw-memory structs into one another's inboxes, as long as those structs consisted only of the basic tagged types.
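A minimal C sketch of what that agreed-upon vocabulary might look like; every name here (msg_tag, msg_value, inbox_push) is hypothetical, since no such kernel API exists:

```c
/* Hypothetical sketch: a small, agreed-upon vocabulary of tagged value
 * types that any process could drop into another process's inbox ring
 * buffer. None of these names exist in any real OS. */
#include <stddef.h>
#include <stdint.h>

enum msg_tag {
    TAG_I64,
    TAG_F64,
    TAG_BYTES,   /* length-prefixed payload follows the struct */
    TAG_TUPLE    /* `len` tagged values follow the struct */
};

struct msg_value {
    enum msg_tag tag;
    uint32_t     len;    /* used by TAG_BYTES and TAG_TUPLE */
    union {
        int64_t i64;
        double  f64;
    } as;
    /* variable-length payload, if any, follows in the same buffer */
};

/* Hypothetical syscall: the kernel copies `size` bytes (struct plus
 * payload) into the target's inbox -- no serialization step, because
 * both sides already agree on the in-memory layout. */
int inbox_push(int target_pid, const struct msg_value *v, size_t size);
```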
Anything going beyond ASCII/UTF-8 or C types will be very hard to agree upon. And I think there is no way around serialization (== chasing pointers and converting to a contiguous chunk of memory), unless you restrict yourself to a single language ecosystem like Erlang or Haskell, which enables data sharing between (often userspace-) threads in a shared address space. But those need a much more complex runtime.
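To make "chasing pointers" concrete, here is a minimal C example of serializing a pointer-ful structure (a singly linked list) into one contiguous buffer; the format is ad hoc, purely for illustration:

```c
/* What "serialization" means here: walking a pointer-ful structure and
 * copying it into one contiguous buffer. The ad-hoc wire format is
 * [u32 count][i32 v0][i32 v1]... */
#include <stdint.h>
#include <string.h>

struct node {
    int32_t      value;
    struct node *next;
};

/* Returns bytes written, or 0 if `cap` is too small. */
size_t serialize_list(const struct node *head, uint8_t *buf, size_t cap)
{
    size_t n = 0;
    for (const struct node *p = head; p; p = p->next)
        n++;                                 /* first pass: count nodes */

    size_t need = sizeof(uint32_t) + n * sizeof(int32_t);
    if (need > cap)
        return 0;

    uint32_t count = (uint32_t)n;
    memcpy(buf, &count, sizeof count);

    uint8_t *out = buf + sizeof count;
    for (const struct node *p = head; p; p = p->next) {
        memcpy(out, &p->value, sizeof p->value); /* second pass: copy */
        out += sizeof p->value;
    }
    return need;
}
```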
Almost every "language ecosystem" has a runtime which itself (at least on *nix) relies on libc. This is why every modern language exposes e.g. a FILE type with the same semantics; they're all just wrapping the libc FILE struct + API.
What I'm imagining is a world where "libc", from way-back-when in the 1970s, exposed declarations for some more interesting ADT types + APIs than just register-sized scalars and ASCIZ strings. Not types that require heap allocation or anything, mind you; just some fancier on-stack value types.
Imagine the C runtime being just "batteries-included" enough that if you were writing, say, a JSON parser in C, there would be obvious "native C types" to decode each of JSON's container types into.
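For instance, a hypothetical sketch of what those "native C types" for JSON's data model could look like (these declarations are imaginary; no real libc ships them):

```c
/* Imaginary "batteries-included libc" declarations -- just enough
 * tagged value types to represent JSON's data model directly.
 * No real libc ships anything like this. */
#include <stddef.h>

typedef enum {
    CVAL_NULL, CVAL_BOOL, CVAL_NUM, CVAL_STR, CVAL_ARRAY, CVAL_OBJECT
} cval_kind;

typedef struct cval {
    cval_kind kind;
    union {
        int    boolean;
        double num;
        struct { const char  *ptr;   size_t len; } str;
        struct { struct cval *items; size_t len; } array;
        struct { struct ckv  *pairs; size_t len; } object;
    } as;
} cval;

typedef struct ckv {
    cval key;
    cval val;
} ckv;

/* A C JSON parser would decode straight into these types, and every
 * language runtime built on this libc would share their layout. */
```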
And now imagine a world where every modern "language ecosystem" had been built up on top of this libc, instead of the one we have: a libc you could rely on to represent almost all the basic types you need in a well-known (or at least consistent-per-machine) in-memory layout.
Imagine how easy message-passing IPC would become in such a world.
That's the idea behind zero-copy serialization protocols like Cap'n Proto [1] and FlatBuffers [2]: they define a particular mapping from memory layout to data model, then provide bindings to that mapping from multiple languages. If you write the data into, say, shared memory or a memory-mapped file, multiple programs in multiple languages can access it, effectively for free.
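Stripped of either library's actual wire format, the core trick looks something like this in C, using plain POSIX shared memory (the struct and names are illustrative, not Cap'n Proto's or FlatBuffers' layout):

```c
/* The core zero-copy idea: agree on a fixed, pointer-free in-memory
 * layout, put it in shared memory, and any process (in any language)
 * can read it in place. Illustrative only. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct point_msg {      /* agreed-upon layout: fixed sizes and offsets */
    uint32_t version;
    int32_t  x, y;
    char     label[32];
};

int publish_point(int32_t x, int32_t y, const char *label)
{
    int fd = shm_open("/demo_point", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, sizeof(struct point_msg)) < 0) {
        close(fd);
        return -1;
    }

    struct point_msg *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
    close(fd);
    if (m == MAP_FAILED)
        return -1;

    m->version = 1;
    m->x = x;
    m->y = y;
    strncpy(m->label, label, sizeof m->label - 1);
    m->label[sizeof m->label - 1] = '\0';

    munmap(m, sizeof *m);   /* the data stays in the shm object */
    return 0;               /* a reader mmaps "/demo_point" and reads in place */
}
```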
There are still a lot of real-world gotchas, though. Versioning and upgradability involve compromises: both protocols are designed so you can add new fields in a backwards-compatible way, but memory requirements bloat once you've modified a schema enough. Code complexity is very high, which keeps the number of implementations fairly limited. And many managed languages don't do well with direct memory addressing (Java, V8, and C# are particular offenders), so the speed benefits are often lost in translation.
But people have certainly been thinking about this issue.