
I remember when I was in COMPSCI-101 or something like that and was introduced to sorting. After coming up with a basic bubble sort algorithm, I thought there was no way it could be improved, because it seemed self-evident that you needed to do all the work the algorithm was doing or you couldn't be sure the list was really sorted. Then, of course, I was introduced to better algorithms like merge sort and quick sort, which showed just how wrong I was...

Perhaps, for the "scientist" writing that initial code, their algorithm also seemed like the best that could be done, because they just don't know much about algorithms and techniques for making them faster. But once you learn about things like divide-and-conquer, work avoidance etc. it becomes pretty much self-evident when your algorithm sucks... it's really not that hard - definitely not for people whose job is something like climate science and who have a very good understanding of maths.

Maybe by teaching scientists the basic "smart" algorithms and just a little about algorithm analysis (Big O notation), massive computational improvements could be had.
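
To make that concrete, here's a rough Python sketch of the gap (the sorts and sizes are just illustrative, not what my course actually used; timings depend on the machine):

  # O(n^2) bubble sort vs the built-in O(n log n) sort.
  import random
  import time

  def bubble_sort(xs):
      xs = list(xs)
      for i in range(len(xs)):
          for j in range(len(xs) - 1 - i):
              if xs[j] > xs[j + 1]:
                  xs[j], xs[j + 1] = xs[j + 1], xs[j]
      return xs

  data = [random.random() for _ in range(10_000)]

  t0 = time.perf_counter()
  bubble_sort(data)        # roughly n^2 / 2 comparisons
  t1 = time.perf_counter()
  sorted(data)             # Timsort: a merge-sort-style O(n log n) algorithm
  t2 = time.perf_counter()

  print(f"bubble sort: {t1 - t0:.2f}s  built-in sort: {t2 - t1:.4f}s")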




For you, me and James Hiebert, this sort of optimisation is exhilarating. Getting a half-hour job down to milliseconds? Awesome!

But for the scientists who wrote the original code, maybe not. Maybe they think of this sort of thing as drudge work, something that doesn't really hold their interest. Their fun is in designing the mathematical concept, and turning it into code is just a chore.

So yeah, we could teach scientists. But even better would be if we could provide scientists with tools that are just naturally fast when expressing problems on their own terms.
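
For example (Python/NumPy here is only an assumption, not what the original code used, and the variable names are made up), the same calculation written as an explicit loop versus on the array's own terms:

  # The same anomaly calculation twice: an explicit loop vs a NumPy
  # expression that reads like the maths and runs in optimized C.
  import numpy as np

  temps = np.random.rand(100_000) * 40.0   # hypothetical temperature field

  # Loop version: correct, but interpreted element by element.
  mean = sum(temps) / len(temps)
  anomalies_loop = [t - mean for t in temps]

  # Vectorized version: expressed on the problem's own terms.
  anomalies_vec = temps - temps.mean()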


> Their fun is in designing the mathematical concept, and turning it into code is just a chore.

It's not about fun. Sometimes it can be the difference between something being possible or not. In this case, the author said they ran this algorithm hundreds of times, so changing it from 30 mins to 0.1 seconds makes things that were impossible before possible. I don't find it fun at all to optimise, but I can see where I may need to in order to make things better, or possible at all... What I am suggesting is that anyone writing code, scientist or not, needs to be aware of this - and know when they just MUST optimise.


As an ex-scientist, I think basic algorithms theory should be incorporated into scientific computing classes. I took a few of these, but none of the concepts from this area were covered. I remember well discovering that some code of mine was slowing down with system size and finally realizing it was because "append" was creating a new array each step… I had no clue that would happen. I was enthralled by the online algorithms course when I finally discovered it - hash tables!!!
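
A rough sketch of both issues, in Python - which is only my guess at what the original code looked like:

  import numpy as np

  # Quadratic growth: np.append copies the whole array on every call.
  arr = np.array([])
  for i in range(10_000):
      arr = np.append(arr, i)

  # Linear: Python lists grow in amortized O(1); convert once at the end.
  buf = []
  for i in range(10_000):
      buf.append(i)
  arr = np.array(buf)

  # Hash tables: membership in a set is O(1); in a list it's a full scan.
  seen_list = list(range(100_000))
  seen_set = set(seen_list)
  print(99_999 in seen_list)   # O(n) scan
  print(99_999 in seen_set)    # single hash lookup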


Also, spending three hours optimizing away a thirty-minute job you only run four times is a net loss of one hour.

You have to optimize the system, as a whole.
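
The arithmetic, spelled out with the numbers from this thread (purely back-of-envelope):

  # Break-even for optimisation effort: time saved minus time spent.
  optimise_hours = 3.0
  saved_per_run_hours = 0.5

  print(4 * saved_per_run_hours - optimise_hours)     # -1.0: a net loss of one hour
  print(200 * saved_per_run_hours - optimise_hours)   # +97.0: hundreds of runs pay off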




