I re-read your comment and you do say "same" prefixes (I think I read it as "shared"), and AFAIK AWS hasn't changed that behavior. You're right that objects with the exact same prefix are still routed to the same shard (or shard set) and share that throughput limit.
P.S.: In a few projects I've built prefixes using timestamps, but not at the very beginning, and worried that they weren't getting sharded out. The change I linked to fixes that problem.
But if you put many objects under the exact same prefix - even sequential dates - you'll hit the threshold the docs give, roughly 5,500 GET (3,500 PUT) requests/sec per prefix.
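For what it's worth, here's a minimal sketch of the usual workaround: derive a short hash component and put it at the front of the key, so writes spread across many prefixes instead of piling onto one. The function name and shard count are my own invention, just to illustrate the idea.

```python
import hashlib

def sharded_key(prefix: str, name: str, shards: int = 16) -> str:
    """Prepend a hash-derived shard component so keys with sequential
    names land under distinct prefixes (hypothetical helper)."""
    shard = int(hashlib.md5(name.encode()).hexdigest(), 16) % shards
    return f"{shard:02x}/{prefix}/{name}"

# Sequential dates under one prefix all share a single partition's limit:
hot = [f"logs/2024-01-{d:02d}.json" for d in range(1, 4)]

# The same names with a leading hash component fan out across prefixes:
cold = [sharded_key("logs", f"2024-01-{d:02d}.json") for d in range(1, 4)]
```

The tradeoff is that listing by date now requires one LIST per shard, which is why the partition-split change you linked is nicer when it applies.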