> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why, if you mandate bounds checking, the language becomes uncompetitive for certain tasks.
In the vast majority of cases it doesn't matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either. Normally, if you're looping over a large array of pixels to perform some operation on them, there will only be a couple of bounds checks before the loop starts - checking the start and end conditions of the loop - not a re-check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
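To make that concrete, here is a minimal Rust sketch (the byte-per-pixel buffer and the brighten-by-10 operation are invented stand-ins, not anything from this thread): one length check happens when the sub-slice is taken, and because the loop bound is the slice's own length, the optimizer can typically prove every index is in range and drop the per-pixel checks.

    fn brighten(pixels: &mut [u8], n: usize) {
        // One bounds check here: taking the sub-slice panics if n is out of range.
        let pixels = &mut pixels[..n];
        // The loop bound is the slice's own length, so the optimizer can
        // usually prove pixels[i] is always in range and elide the checks.
        for i in 0..pixels.len() {
            pixels[i] = pixels[i].saturating_add(10);
        }
    }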
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised that the bounds checking in Rust is basically negligible - maybe 5% at most. In many cases, if you use iterators, you might not see a hit at all.
Then again, if you have an image processing algorithm in the year 2025 that is literally reading every single pixel one by one to perform a 2-instruction operation and doing a bounds check on every access, you're doing a lot of things very wrong.
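For the iterator point, a hedged sketch (same invented brighten example as above): iterating the slice directly never produces an index at all, so there is no bounds check to pay for or to optimize away.

    fn brighten_iter(pixels: &mut [u8]) {
        // No indices, so no bounds checks to begin with; this is the
        // idiomatic way to touch every element.
        for px in pixels.iter_mut() {
            *px = px.saturating_add(10);
        }
    }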
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
> Your understanding of how bounds checking works in modern languages and compilers is not up to date.
One I am familiar with is Swift, which does exactly this, because bounds checking is a library feature of Array.
Which languages can always determine - through function calls, indirect addressing, etc. - whether they need to bounds check or not?
And how will I know whether that elimination succeeded or whether something silently failed?
> if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong
I agree. And note this is an example of a scenario you can encounter in other forms.
> Do you have any examples at all? Or is this just speculation?
Yes. Java and Python are not competitive for graphics and audio processing.
You can write Java that uses arrays that are allocated once - will it have loops as fast as C? Why not?
And furthermore, I don't suspect you're proposing we should be using C + bounds checking (that's already a GCC flag?), but rather that bounds checking is one of many safety features.
The whole pitch of Java was exactly what OP said - let's pay a 10-30% cost to get these nice portability and memory benefits over C++ - and it didn't work out that way, especially as memory speeds have diverged from CPU speeds.
> Yes. Java and python are not competitive for graphics and audio processing.
Because Python is an interpreted language and Java has garbage collection, not because they have bounds checking.
Using a language like Rust, the bounds-checking overhead compared to C code is usually minimal, like 0-5%. The compiler can even optimize away many bounds checks (safely) if you structure your code properly, such as by using iterators.
Make sure you read the note about how the author was mistaken about the 15% speedup from removing bounds checking. The actual change attributable to the bounds checks was much smaller - a couple of percent. You can even structure your code so that the bounds checks disappear altogether when the compiler can confirm safety at compile time.
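As a small illustration of "structure your code so the checks disappear" (the blend function is invented for this example, not taken from the article): zipping two slices hands the compiler the shared length directly, so there is no indexing left to check.

    fn blend(dst: &mut [u8], src: &[u8]) {
        // zip stops at the shorter slice, so no index can go out of
        // range and no bounds check is emitted in the loop body.
        for (d, s) in dst.iter_mut().zip(src.iter()) {
            *d = d.wrapping_add(*s / 2);
        }
    }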
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
There are inevitably those who don't know how to program but are responsible for hiring those that can. Language popularity is an obvious metric with good utility for that case.
Even so, you haven't provided any compelling evidence that C or C++ made their design decisions in order to be more appealing or more popular.