It's not that performance isn't a priority, it's just that you can achieve it later through optimization. "Premature optimization is the root of all evil," as a wise man once said (Knuth, I believe, not Dijkstra).

Anyway, GIMP is getting faster - with 2.9 you can already use GPU-accelerated operations and plugins, if that is your concern.
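For what it's worth, the GPU path in 2.9 comes through GEGL, which can offload a number of operations to OpenCL (enabled via the GEGL_USE_OPENCL environment variable, if I recall correctly). A rough sketch of a GEGL processing graph in C might look like this - the operation names are standard "gegl:" ops, and the file paths and blur radius are just placeholders:

    #include <gegl.h>

    int main (int argc, char **argv)
    {
      gegl_init (&argc, &argv);  /* set GEGL_USE_OPENCL=yes to try the GPU path */

      GeglNode *graph = gegl_node_new ();
      GeglNode *load  = gegl_node_new_child (graph,
                                             "operation", "gegl:load",
                                             "path", "input.png", NULL);
      GeglNode *blur  = gegl_node_new_child (graph,
                                             "operation", "gegl:gaussian-blur",
                                             "std-dev-x", 5.0,
                                             "std-dev-y", 5.0, NULL);
      GeglNode *save  = gegl_node_new_child (graph,
                                             "operation", "gegl:png-save",
                                             "path", "output.png", NULL);

      gegl_node_link_many (load, blur, save, NULL);
      gegl_node_process (save);  /* pull the result through the graph */

      g_object_unref (graph);
      gegl_exit ();
      return 0;
    }

The nice part is that the graph is declarative: whether a given node runs on the CPU or the GPU is GEGL's decision, not the plugin author's.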

What you don't want is a program that crashes all the time, or one that is so convoluted (because "performance") that nobody is able to develop it further, since nobody can wrap their head around all the implications of a modification.




No, it's not about premature optimization but about designing for performance. It's always a balancing act, but the earlier you make the right choices, the better. There are questions that have to be asked during the design process, before writing a single line of code. "Is it going to be fast enough?" is one of them, and you can't answer it with "I'll figure out how to optimize it later" or "computer power will eventually catch up".

Oh, and by the way: stop using the "GPU accelerated" argument as if it were a silver bullet. The GPU is not a crutch for making slow operations barely usable; it should be an extra performance boost for operations that already work.


The GPU is quite the silver bullet, IMHO. It's a chip specifically designed for graphics - it is, after all, a Graphics Processing Unit. You can't expect a general-purpose unit to come anywhere close in performance.

There are cases where coding for performance from the get-go is the way to go - think the_silver_searcher vs. ack; and there are cases where "computer power will eventually catch up" - the GNU coreutils did just that: they were coded for a system where a large amount of RAM would be available (even if, back in the '80s, it wasn't yet), and so removed many of the limitations that the proprietary tools of the time had.

But in a large program, one that aims to grow even larger as features are added, performance is not your primary concern. If you have good abstractions and you optimize the pixel conversion layer (say, babl) by 50%, that optimization adds up across every operation that uses it. If you write your code in assembler for speed's sake, you'll never get past the complexity of MS Paint.
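To make the babl point concrete, here's a minimal sketch of what that layer does, using babl's public C API (the formats and pixel values here are just examples): every caller converts pixels through a cached "fish", so a faster fish speeds up everything built on top of it.

    #include <babl/babl.h>

    int main (void)
    {
      babl_init ();

      /* Source: 8-bit non-linear sRGB; destination: linear float. */
      const Babl *src_fmt = babl_format ("R'G'B'A u8");
      const Babl *dst_fmt = babl_format ("RGBA float");

      /* A "fish" is babl's cached converter between two formats;
         this is the layer where a 50% speedup pays off everywhere. */
      const Babl *fish = babl_fish (src_fmt, dst_fmt);

      unsigned char src[4 * 4] = { 255, 128, 0, 255 };  /* one orange pixel, rest zero */
      float         dst[4 * 4];

      babl_process (fish, src, dst, 4);  /* convert 4 pixels */

      babl_exit ();
      return 0;
    }

That's the sense in which optimizations behind a good abstraction add up: babl gets faster and nobody outside it has to change a line.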

I worked on a web application that was a bit over 100k lines of code (at least the subsystems I had access to) - some bad architectural decisions made progress stall, and features were delayed by as much as half a year. Every change at the deeper levels broke all kinds of things at the UI level. Those were some shitty days, trust me.


> No, it's not about premature optimization but about designing for performance.

So we, in fact, completely agree.



