
The O(1) complexity is an amortized result, obtained by averaging the cost over a sequence of appends, which is exactly what asymptotic analysis allows us to assume. However, I think we should be sensitive to the fact that real computers are machines in the real world and can run into circumstances that differ greatly from the mathematical result.

For example, if you allocate a slightly-too-small array and then append just slightly too many items, you can end up triggering a reallocation on every append. Ideally nobody ever writes code that bad and we get to assume the O(1) behavior always holds; in practice we need to know how the actual algorithm works and make sure we don't accidentally code perverse cases.
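To make the difference concrete, here is a toy sketch (the class, policies, and loop size are made up for illustration) comparing a fixed-increment growth policy, which really does reallocate on essentially every append, with the geometric (doubling) growth that real dynamic arrays use, where reallocations become exponentially rare. Real implementations such as CPython lists or C++ std::vector use their own geometric growth factors, but the shape of the result is the same.

    # Toy dynamic array that counts reallocations under two growth policies.
    # Illustrative sketch only, not any library's actual implementation.
    class DynArray:
        def __init__(self, grow):
            self._grow = grow            # function: old capacity -> new capacity
            self._cap = 1
            self._len = 0
            self._buf = [None] * self._cap
            self.reallocations = 0

        def append(self, x):
            if self._len == self._cap:
                # Array is full: allocate a bigger buffer and copy everything over.
                self._cap = self._grow(self._cap)
                new_buf = [None] * self._cap
                new_buf[:self._len] = self._buf
                self._buf = new_buf
                self.reallocations += 1
            self._buf[self._len] = x
            self._len += 1

    n = 100_000
    fixed = DynArray(lambda cap: cap + 1)     # "perverse" policy: grow by one slot
    doubling = DynArray(lambda cap: cap * 2)  # geometric policy: double when full
    for i in range(n):
        fixed.append(i)
        doubling.append(i)
    print(fixed.reallocations)     # 99999 -- a reallocation on almost every append
    print(doubling.reallocations)  # 17    -- O(log n) reallocations, O(1) amortized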




Nope, that is not correct. Insert is always (amortized) O(1). This is a worst-case bound. There is no “bad case”. You only need to be aware that a single insert might take longer, but that is not relevant in the vast majority of cases.

“Append to it just slightly too many items” – this “too many” is exactly what guarantees you have appended enough items for the expansion to amortize to O(1).

That’s the nice thing about a complete mathematical proof – it doesn’t leave any dodgy edge-cases.
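For the record, the textbook calculation behind that proof is short. A sketch, assuming the capacity doubles whenever the array is full (other geometric growth factors only change the constant):

    % Elements are copied only when the capacity grows, so over n appends:
    \text{copies} \;\le\; 1 + 2 + 4 + \dots + 2^{\lfloor \log_2 (n-1) \rfloor} \;<\; 2n
    % Adding the n plain writes gives total work < 3n, i.e. O(1) amortized per
    % append -- and this bound holds for every append sequence, worst case.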



