
In my experience, the problem with juniors and performant code has little to do with lifetimes. The issues I see are just as easy to hit in C#, Java, or Rust as in C++ - e.g. repeatedly using string appends rather than building up a list and doing a single allocation, or holding a lock for the duration of a reload and parse instead of just for the reference swap, etc. Thread safety, more often than not, isn't that hard - just do the work on one thread (I work on a web service with thread-per-request).

I'm optimistic about Rust but realistic (I think) about how much inertia the incumbent (C++) really has. I'm also decidedly skeptical about Rust as a cure-all - while I've seen a dozen or so real lifetime bugs in a decade, about half of them were caused by a developer misusing an API: deliberately calling a method that is for transferring lifetimes, and then not assigning the reference to a new owner.

In my experience, most bugs are just badly written code, not the sort of problem your compiler can fix.



> repeatedly using string appends rather than building up a list and doing a single allocation

I think this "make a list" idea performs best where you know exactly how many things will be in the "list" but have little idea how long the concatenated String might be: you can keep the "list" as a local array (no allocation), do a single String::with_capacity (one allocation), and then append each piece to it.
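A minimal sketch of that case, assuming three known pieces (join_three is a made-up name):

```rust
// Known count, unknown length: three pieces, one sized allocation,
// no intermediate list on the heap.
fn join_three(a: &str, b: &str, c: &str) -> String {
    let parts = [a, b, c];                     // stack array, no allocation
    let len: usize = parts.iter().map(|s| s.len()).sum();
    let mut out = String::with_capacity(len);  // the single allocation
    for p in parts {
        out.push_str(p);                       // copies only, never reallocates
    }
    out
}

fn main() {
    let s = join_three("foo", "-", "bar");
    assert_eq!(s, "foo-bar");
    println!("{s}");
}
```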

Otherwise, obviously we're not going to go around assembling a LinkedList since this is the 21st century and we're not crazy, so let's assume if we don't know how long that "list" is we're accumulating references in a Vec<&str>. We don't know how big to make the Vec either, and so that will grow as we push items onto it.

Rust's Strings are - under the hood - basically a Vec<u8> that promises to be valid UTF-8. So we're playing with the exact same allocation algorithm and same amortized costs in String and Vec<&str>.

If we're appending very few Strings in the typical case, the "list" approach costs extra because we're doing a bunch more allocations + copies to make the Vec<&str> that we're throwing away shortly afterwards. Likewise if the strings are very short.

And after all that it's more complicated, which counts against it unless performance was critical.

If I saw a junior reflexively doing this (with a Vec or some other "list" structure) in code I was reviewing, I'd ask them why. If their answer is "performance", I'd remind them: if we aren't measuring it, then it isn't performance, it's just wanking. So, where's the measurement?


Depends on your strings, really. If your strings are immutable (as in C#), it's not the same amortized cost, and you end up wanting to use a StringBuilder almost every time. It's not premature optimization - it's just the standard pattern for performant code.

But I see your point - I inadvertently gave an example that is performance basics in some languages and fine tuning in others.

The lock example is perhaps more relevant.

Another good example is appropriate caching. Often services want to parse something (e.g. an ini file) and cache it in memory. Cache the right (fully parsed) format and you get a highly performant configuration system - taking less than 1% of CPU time. But let some juniors do parsing on top of the caching layer rather than underneath it, and suddenly your config system is 5-10% of your service's CPU and a target for optimization. Most of my daily work in growing juniors is patterns like this - nothing a language would help with.
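A Rust sketch of caching underneath the parse - parse_ini is a made-up toy parser and the hard-coded text stands in for a file read; the point is that the cache holds the parsed map, so every lookup after the first is a cheap map access:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Toy "ini" parser: key = value lines only, for illustration.
fn parse_ini(text: &str) -> HashMap<String, String> {
    text.lines()
        .filter_map(|l| l.split_once('='))
        .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
        .collect()
}

static CONFIG: OnceLock<HashMap<String, String>> = OnceLock::new();

fn config() -> &'static HashMap<String, String> {
    // Parse exactly once; callers never touch the raw text again.
    CONFIG.get_or_init(|| parse_ini("retries = 3\ntimeout = 30"))
}

fn main() {
    assert_eq!(config().get("retries").map(String::as_str), Some("3"));
    assert_eq!(config().get("timeout").map(String::as_str), Some("30"));
    println!("retries = {:?}", config().get("retries"));
}
```

The broken version caches the raw text and calls parse_ini on every lookup - same cache, wrong layer.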


Here is another example: using a for loop instead of an intrinsic like System.arraycopy().




