I'm guilty of having written code like this on occasion, as I'm sure we all are. It's easy to look at these changes and say "of course it's slow", but for every instance of this that OP fixed there are probably 99 other occurrences of the same thing that _don't_ matter for performance. In a perfect world, sure, everyone would know whether they're on the hot path or the slow path and prioritise accordingly - it doesn't matter if something that's called once a day takes 15ms instead of 0.15ms, until someone adds it to the hot path and it gets called 290 times.
I stubbornly hate code like this - but largely because of the security and correctness nightmare it becomes when combined with user-generated data.
For example, in a library one of my coworkers wrote, object paths were flattened into strings (eg “user.address.city”) and then regular expressions and the like operated on that. Sometimes user data ended up in the path (eg “cookies.<uuid>”). But then - what happens if the user puts a dot in their cookie? Or a newline? Does the program crash? Are there security implications we aren’t seeing? What do we do if dots are meaningful? Do we filter them out? Error? Escape them (and add a flurry of unit tests)? Are there bugs in the regex if a path segment is an empty string?
It’s a nightmare. Much better to have paths be a list of keys (eg [“cookies”, uuid]). That’s what they represent anyway, and the code will be faster, and this whole class of string parsing bugs disappears.
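To make that concrete, here's a minimal sketch (hypothetical helper names, not the actual library): the flattened form lets a user-supplied segment containing a dot collide with a genuinely nested path, while the list-of-keys form never parses anything, so there's nothing to escape and nothing to get wrong.

```python
from typing import Any

# Stringly-typed approach: flatten the path into one dot-joined string
# and parse it back later. A user-controlled segment containing "."
# silently changes the structure.
def flatten(path: list[str]) -> str:
    return ".".join(path)

# Two logically different paths collide into the same string:
assert flatten(["cookies", "abc.def"]) == flatten(["cookies", "abc", "def"])

# Structured approach: keep the path as a list of keys and walk it
# directly. No parsing, no escaping, no ambiguity about what "." means,
# and keys containing dots, newlines, or nothing at all are just keys.
def get_path(obj: Any, path: list[str]) -> Any:
    for key in path:
        obj = obj[key]
    return obj

data = {"cookies": {"abc.def": "session-token"}}
print(get_path(data, ["cookies", "abc.def"]))  # -> session-token
```

The structured version is also the honest one: the path _is_ a sequence of keys, and the string form was only ever a lossy serialisation of it.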
Oh I totally agree. It's also my gripe with Unix tools in general, and with chaining tools together in bash with pipes - everything gets flattened into untyped text and re-parsed downstream, with all the same classes of bugs.
Perfect is the enemy of good though: probably 90% of my code is done properly, and I cut corners where I think it's appropriate - sometimes that's ok, and sometimes in hindsight it's not.