> repeatedly using string appends rather than building up a list and doing a single allocation
I think this "make a list" idea performs best where you know exactly how many things will be in the "list" but have little idea how long the concatenated String might be: you can keep the "list" as a local array (no allocation), do a single String::with_capacity (one allocation), and then push each piece onto it.
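A minimal Rust sketch of that shape (the three-piece array and the helper name are just illustrative):

```rust
// Known count, unknown lengths: the "list" is a stack array (no
// allocation), and the String gets exactly one allocation.
fn join_three(parts: [&str; 3]) -> String {
    let total: usize = parts.iter().map(|s| s.len()).sum();
    let mut out = String::with_capacity(total); // the single allocation
    for p in parts {
        out.push_str(p); // never reallocates: total length was reserved
    }
    out
}

fn main() {
    let joined = join_three(["foo", "bar", "baz"]);
    assert_eq!(joined, "foobarbaz");
    println!("{joined}");
}
```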
Otherwise, obviously we're not going to go around assembling a LinkedList since this is the 21st century and we're not crazy, so let's assume that if we don't know how long that "list" is, we're accumulating references in a Vec<&str>. We don't know how big to make the Vec either, so it will grow as we push items onto it.
Rust's Strings are - under the hood - basically a Vec<u8> that promises to be valid UTF-8. So we're playing with the exact same allocation algorithm and same amortized costs in String and Vec<&str>.
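To make that concrete, a sketch of the unknown-count case - `join_unknown` is a made-up name, and `concat` is what does the single sized allocation at the end:

```rust
// Unknown count: collect references first, then one sized allocation.
// The Vec grows by the same doubling strategy a String would, so the
// amortized push cost we "saved" on the String is paid on the Vec.
fn join_unknown<'a>(words: impl Iterator<Item = &'a str>) -> String {
    let refs: Vec<&str> = words.collect(); // grows as items arrive
    refs.concat() // sums lengths, allocates once, copies each piece
}

fn main() {
    let s = join_unknown(["a", "bb", "ccc"].into_iter());
    assert_eq!(s, "abbccc");
}
```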
If we're appending very few Strings in the typical case, the "list" approach costs extra because we're doing a bunch more allocations + copies to make the Vec<&str> that we're throwing away shortly afterwards. Likewise if the strings are very short.
And after all that it's more complicated, which counts against it unless performance was critical.
If I saw a junior reflexively doing this (with a Vec or some other "list" structure) in code I was reviewing, I'd ask them why. If their answer is "performance", I'd remind them: if we aren't measuring it, then it isn't performance, it's just wanking. So, where's the measurement?
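For what it's worth, even a crude Instant-based sketch like this (not a real benchmark - reach for criterion if the answer matters) is a better response than arguing from vibes:

```rust
use std::time::Instant;

// Direct appending: amortized growth, no intermediate structure.
fn via_push(parts: &[&str]) -> String {
    let mut s = String::new();
    for p in parts {
        s.push_str(p);
    }
    s
}

// The "list" version: an extra Vec allocation we throw away.
fn via_vec(parts: &[&str]) -> String {
    let v: Vec<&str> = parts.to_vec();
    v.concat()
}

fn main() {
    let parts: Vec<&str> = vec!["hello "; 1000];

    let t = Instant::now();
    let a = via_push(&parts);
    let push_time = t.elapsed();

    let t = Instant::now();
    let b = via_vec(&parts);
    let vec_time = t.elapsed();

    assert_eq!(a, b); // same result either way
    println!("push: {push_time:?}, vec: {vec_time:?}");
}
```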
Depends on your strings, really. If your strings are immutable (as in C#), it's not the same amortized cost, and you end up wanting to use a StringBuilder almost every time. It's not premature optimization - it's just the standard pattern for performant coding.
But I see your point - I inadvertently gave an example that is performance basics in some languages and fine tuning in others.
The lock example is perhaps more relevant.
Another good example is appropriate caching. Often services want to parse something (e.g. an ini file) and cache it in memory. Caching the right (fully parsed) format will lead to a highly performant configuration system - taking less than 1% of CPU time - but let some juniors do parsing on top of the caching layer rather than underneath it, and suddenly your config system is 5-10% of your service CPU and a target for optimization. Most of my daily work in growing juniors is patterns like this, nothing a language would help with.
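A rough Rust sketch of "cache the parsed form, not the raw text" - the ini contents and key names here are invented for illustration:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// The cache holds the *parsed* config, so parsing happens exactly once;
// every later lookup is just a HashMap hit.
static CONFIG: OnceLock<HashMap<String, String>> = OnceLock::new();

// A toy "key = value" parser standing in for a real ini parser.
fn parse_ini(raw: &str) -> HashMap<String, String> {
    raw.lines()
        .filter_map(|line| line.split_once('='))
        .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
        .collect()
}

fn config() -> &'static HashMap<String, String> {
    // In a real service the raw text would come from disk; a literal
    // stands in here. The anti-pattern is re-running parse_ini on
    // cached *text* at every lookup instead of caching this map.
    CONFIG.get_or_init(|| parse_ini("retries = 3\ntimeout = 30"))
}

fn main() {
    assert_eq!(config().get("retries").map(String::as_str), Some("3"));
    assert_eq!(config().get("timeout").map(String::as_str), Some("30"));
}
```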