You could look at Hyrum's law as cautionary: don't be that person who relies on unspecified behaviour. This might save you from some unpleasant breakage.
But for Hyrum and his team, the law is about their burden as engineers: any observable behaviour they change, no matter how seemingly unimportant, will break somebody who wasn't so cautious, perhaps unwittingly.
Example: if you're an experienced programmer, you're probably aware of stable sorts. You know that if you need equal elements to keep their relative order after sorting, you must use a stable sort, and you're alert when reading documentation: did it say this sort is stable? No? Then maybe it isn't, and I need to guard against that.
But a new programmer may just assume sorts are stable without ever thinking about it. Naively they might reason that of course all sorts are stable (why do the extra work of moving elements that are already in the right place?), without understanding that common sorting algorithms often aren't stable. And if they first use sorts on values with no sense of identity, like integers or, in many languages, strings, there's no sign of a problem: all sorts behave the same. [1,2,4,4,6,8] gets turned into [1,2,4,4,6,8] and that looks fine. Until one day that programmer is working with data that does have identity, and [1,2,4‡,4†,6,8] turns into [1,2,4†,4‡,6,8] because an unstable sort doesn't care which 4 is which, and the naive programmer's code blows up because they're astonished that 4† and 4‡ changed places.
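To make that concrete, here's a sketch in Python, whose built-in sorted() is guaranteed stable. A hand-rolled selection sort (a classic unstable algorithm) stands in for the naive sort; the tags play the role of 4† and 4‡:

```python
def selection_sort(xs, key):
    """Classic selection sort: correct output order, but NOT stable."""
    xs = list(xs)
    for i in range(len(xs)):
        m = i
        for j in range(i + 1, len(xs)):
            if key(xs[j]) < key(xs[m]):
                m = j
        xs[i], xs[m] = xs[m], xs[i]  # long-range swaps can reorder equal keys
    return xs

# Values with identity: (sort key, tag)
items = [(4, "‡"), (5, ""), (4, "†"), (1, "")]

stable = sorted(items, key=lambda t: t[0])        # Python's sort is stable
unstable = selection_sort(items, key=lambda t: t[0])

print(stable)    # [(1, ''), (4, '‡'), (4, '†'), (5, '')] — 4‡ stays before 4†
print(unstable)  # [(1, ''), (4, '†'), (4, '‡'), (5, '')] — the two 4s swapped
```

Both results are "sorted" by value; only the stable one preserves which 4 came first.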
Yep, but some docs are also so underspecified that you find possibilities the other side never even thought to consider, change, or document. I think that complements this law (if it isn't already part of it). And writing even shallow docs is hard. When there's a knowledge gap between two parties, some assumptions are inevitable; otherwise each side would have to implement overcomplicated logic for nothing. For example, I'd like to assume that trades always precede their executions, because that assumption could save me a couple of hours, if not buy me a simpler architecture. That's a strong incentive, even if the assumption fails once a week, which may be acceptable.
Edit: since we're trading examples, here's another one: JS object key order. Keys come out of Object.keys(), for-in, and JSON.stringify() in insertion order. But that order is not specified, and neither is its reproducibility. So in Node, Chrome, and maybe IE it's predictable (so far), but Firefox couldn't care less. The same goes for other languages; some even go to the length of deliberately randomizing iteration order to prevent false dependencies.
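The same trap exists in Python's json module, which serializes dict keys in insertion order. The defensive move, if you actually need a reproducible order, is to ask for one explicitly instead of relying on whatever you happen to observe:

```python
import json

d = {"b": 1, "a": 2}
print(json.dumps(d))                  # {"b": 1, "a": 2} — insertion order
print(json.dumps(d, sort_keys=True))  # {"a": 2, "b": 1} — the order you asked for
```

With sort_keys=True the output no longer depends on how the dict was built, so it stays stable across refactors that change insertion order.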
It depends. Bare hash tables are semi-random, and they naturally get reshuffled at growth points. But I remember reading in some language's manual that it reshuffles iteration order even for immutable hash tables. JS objects are usually based on "shapes" (hidden classes), which don't follow hash-table semantics:
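CPython's sets are an example of the bare-hash-table case: iteration order falls out of the table layout, and growing the table rehashes the surviving elements, which can reshuffle them. A sketch (the exact printed orders are implementation details, so the code deliberately asserts nothing about them):

```python
s = {1, 9}           # small ints hash to themselves; 1 and 9 collide modulo a small table
before = list(s)     # iteration order in the initial small table
for i in range(100, 130):
    s.add(i)         # enough inserts to force at least one resize/rehash
after = [x for x in s if x in (1, 9)]
print(before)        # order of 1 and 9 before the resize (implementation-defined)
print(after)         # their relative order may differ after the rehash
```

The point is only that nothing in the set API promises the two printouts agree; code that assumes they do is leaning on Hyrum's law.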
One of the examples inside Google was similar; I think it was the ordering of keys in a Python map (which has no documented behaviour). Some upgrade changed it from stable to random, and a bunch of tests started failing.
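That failure mode is easy to reproduce today via string hash randomization: the same program can iterate a set in a different order on each run depending on PYTHONHASHSEED, which is exactly what breaks tests that memorized one particular ordering. A sketch (it prints the two orders rather than asserting they differ, since equal orders are possible):

```python
import os
import subprocess
import sys

snippet = "print(list({'alpha', 'beta', 'gamma', 'delta'}))"

def run(seed):
    # Re-run the interpreter with a fixed hash seed to get a reproducible layout.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.run([sys.executable, "-c", snippet],
                         env=env, capture_output=True, text=True)
    return out.stdout.strip()

# Same code, different hash seeds: the contents match, the order often doesn't.
print(run("1"))
print(run("2"))
```

A test that compares list(s) against a literal list passes or fails depending on the seed; comparing sets (or sorting first) is the fix.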
It's possible you're thinking of Swiss Tables, Google's unordered hash map (and hash set) implementation for C++. There's a CppCon presentation whose conceit is that Hyrum interrupts the presenter each time some minor improvement to the design trips up real users, reflecting Google's actual experience doing this.
But Python's dict type had such terrible performance characteristics that they actually made it much faster [edited to add: and smaller!] while also adding guaranteed ordering in current 3.x versions, which is pretty extraordinary for such an important built-in data structure.
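A quick illustration of the guarantee (CPython 3.7 and later): insertion order is preserved, and updating an existing key keeps its position:

```python
d = {"zebra": 1, "apple": 2, "mango": 3}
print(list(d))   # ['zebra', 'apple', 'mango'] — insertion order, now guaranteed
d["apple"] = 99  # updating an existing key keeps its slot
print(list(d))   # still ['zebra', 'apple', 'mango']
```

What used to be an unspecified CPython 3.6 implementation detail became a language guarantee in 3.7, so code relying on it is no longer a Hyrum's-law hazard.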