
In general you want as few as possible of both.


You could also optimize everything for future updates that optimize things even further for even more updates...

Hmm... that was supposed to be a joke, but our law-making dev team isn't all that productive, to put it mildly. Perhaps some of that bloat would be a good thing until we are brave enough to do the full rewrite.


This is wrong for the same reason that using single-letter variable names to keep things concise is usually wrong.

I'd rather have something a bit more verbose and clear than something cryptic and confusing. There are many actors in the world with different brains.
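A contrived C sketch of the trade-off (the function and all names are invented for illustration): both versions compute the same thing, but only one tells you what it is doing.

```c
/* Cryptic: correct, but the reader must reverse-engineer the intent. */
double f(double p, double r, int n) {
    double x = p;
    for (int i = 0; i < n; i++)
        x *= 1.0 + r;
    return x;
}

/* Verbose and clear: the same computation, readable at a glance. */
double compound_balance(double principal, double annual_rate, int years) {
    double balance = principal;
    for (int year = 0; year < years; year++)
        balance *= 1.0 + annual_rate;
    return balance;
}
```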


That's right. This is the reason all my code looks like an entry to PerlGolf. /s

The world's complicated. "Every complex problem has a solution which is simple, direct, and wrong."

Simplicity is a laudable goal, but it's not always the one thing to optimize for.


Ah, but "simplicity" is not necessarily "fewest lines of code".

Code is first and foremost for human consumption. The compiler's job is to worry about appeasing the machine.

(Of course, that's the normative ideal. In practice, the limits of compilers sometimes require us to appease the architectural peculiarities of the machine, but this should be seen as an unfortunate deviation and should be documented for human readers when it occurs.)


This is just a belief about code, and one of many. Another belief is that code and computer systems are inseparable, and the most straightforward and simple code is code that leverages and makes sense for its hardware.

As in, you can pretend hardware doesn't exist, but that doesn't actually change anything about the hardware. So you are then forced to design around the hardware without necessarily knowing that's what you're doing.

Exhibit A: distributed systems. Why do people keep building distributed systems? Monoliths running on one big machine are much simpler to handle.

People keep building distributed systems because they don't understand, and don't want to understand, hardware. They want to abstract everything, have everything in its own little world. A nice goal.

But in actuality, abstracting everything is very hard. And the hardware doesn't just poof disappear. You still need network calls. And now everything is a network call. And now you're coordinating 101 dalmatians. And coordination is hard. And caching is hard. And source of truth is hard. And recovery is hard. All these problems are hard, and you're choosing to do them, because computer hardware is scary and we'd rather program for some container somewhere and string, like, 50 containers together.


As soon as you start developing web sites/applications, you are entering distributed systems.


> code and computer systems are inseparable and the most straightforward and simple code is code that leverages and makes sense for its hardware

You're missing the point. Code is separable from hardware per se, even if in practice the two typically co-occur and concerns about the latter leak into the former. The hardware is in the service of our code, not our code in service of the hardware. Targeting hardware is not, in fact, the most straightforward option, because you're destroying portability and obscuring the code's meaning with distracting, tangential architectural minutiae.

> you can pretend hardware doesn't exist but that doesn't actually change anything about the hardware

You're mischaracterizing my claim. I didn't say hardware doesn't matter. Tools matter - and their particular limitations are sometimes felt by devs acutely - but they're not the primary focus.

My claim was that code is PRIMARILY for human consumption, and it is. It is written to be read by a person first and foremost; unreadable but functioning code is worthless. Otherwise, why have programming languages at all? Even C is preposterously high-level if code isn't for human consumption. Heck, even assembly is full of concepts that have no objective reality or direct counterpart in the hardware. Hardware concerns only enter the picture secondarily, because the code must run on it; they are a practical concession to the instrument.

So, in practice, you may need to be concerned with the performance/memory characteristics of your compiled code on a particular architecture (which is really knowledge of the compiler and how well it targets that hardware given your implementation). Compilers generally outperform hand-tuned optimizations, and at best you will be using only a general knowledge of your architecture when deciding how to structure your implementation. And you will be doing this indirectly, via the operational semantics of the language you're using, as that is as much control as the language gives you over how the hardware is used.
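To illustrate that last point with a minimal C sketch (an invented example): the only architectural knowledge used below is the general fact that C arrays are row-major and that caches reward sequential access, and even that is expressed entirely through the language's own semantics, not through any hardware-specific construct.

```c
#include <stddef.h>

#define N 1024

/* Row-major traversal: visits elements in the order they sit in
 * memory, so each fetched cache line is fully used. */
long sum_row_major(int a[N][N]) {
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal: same result, but each access strides
 * N ints, touching a new cache line almost every iteration. */
long sum_col_major(int a[N][N]) {
    long sum = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}
```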

> Exhibit A: distributed systems. Why do people keep building distributed systems? Monoliths running on one big machine are much simpler to handle.

In principle, you can write your code as a monolith, and your language's compiler can handle the details of distributing computation. This is up to the language's semantics. Think of Erlang for inspiration.
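A minimal C sketch of the monolith-first idea (this is not Erlang, and the executor interface here is invented for illustration; it is a library-level stand-in for what a language runtime could do): the calling code stays monolithic, and where the work actually runs is hidden behind a single function pointer.

```c
#include <stdio.h>

/* An executor runs a task with an argument; whether it does so
 * in-process or on another node is an implementation detail. */
typedef int (*executor_fn)(int (*task)(int), int arg);

/* Local executor: just call the task in-process. */
static int run_local(int (*task)(int), int arg) {
    return task(arg);
}

/* A hypothetical remote executor would have the same signature:
 * serialize the argument, ship the work to another node, and
 * block on the result. Callers would not change at all. */

static int square(int x) { return x * x; }

int main(void) {
    executor_fn exec = run_local;    /* swap the implementation here */
    printf("%d\n", exec(square, 7)); /* caller code never changes */
    return 0;
}
```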

> People keep building distributed systems because they don't understand, and don't want to understand, hardware.

Unless you're talking about people who misuse "Big Data" tech when all they need is a reasonably fast bash script, that's not why good developers build distributed systems. And even then, the culprit isn't some special ignorance of hardware; it's ignorance of the complexity of distributed systems, and of whether the domain the dev is operating in actually benefits from a distributed design.

> But in actuality, abstracting everything is very hard. And the hardware doesn't just poof disappear. You still need network calls. And now everything is a network call. And now you're coordinating 101 dalmatians. And coordination is hard. And caching is hard. And source of truth is hard. And recovery is hard. All these problems are hard, and you're choosing to do them, because computer hardware is scary and we'd rather program for some container somewhere and string, like, 50 containers together.

This is neither here nor there. Not only are "network calls" and "caching" and so on abstractions, they're not hardware concerns. Hardware allows us to simulate these abstractions, but whatever limits the hardware imposes are - you guessed it - reflected in the abstractions of your language and your libraries. And more importantly, none of this has any relevance to my claim.


> Code is first and foremost for human consumption. The compiler's job is to worry about appeasing the machine.

Tangentially, it continues to frustrate me that C code organization directly impacts performance. Want to factor that code out into a function? Pay the cost of a new stack frame and a potentially non-local jump (bye, I-cache!). Want it not to do that? Add more keywords (`inline`) and hope the compiler applies them.

(I kind of understand the reason for this. Code bloat is a thing, and if everything were inlined the resulting binary would be 100x bigger.)


`inline` in C has very little to do with inlining these days. You most certainly don't need it to have functions in the same translation unit inlined, and LTO will inline across units as well. The inlining heuristics generally don't care whether the function is marked `inline`, only how complex it is. If you actually want to reliably control inlining, you use things like `__forceinline` (MSVC) or `[[gnu::always_inline]]` / `__attribute__((always_inline))` (GCC/Clang).

Regarding code size, it's not just that the binary becomes larger; overly aggressive inlining can actually hurt performance for a number of reasons, instruction-cache pressure being the obvious one.
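A small C sketch of those knobs, using the GCC/Clang attribute spellings (`__forceinline` is the MSVC equivalent):

```c
/* `inline` is only a hint; under C99 semantics it mostly affects
 * linkage, and the optimizer decides for itself whether to inline. */
static inline int add_hint(int a, int b) { return a + b; }

/* GCC/Clang: inline this regardless of the usual heuristics. */
static inline __attribute__((always_inline)) int add_forced(int a, int b) {
    return a + b;
}

/* GCC/Clang: never inline, e.g. to keep a cold error path from
 * bloating the caller's instruction-cache footprint. */
static __attribute__((noinline)) int cold_path(int a, int b) {
    return a - b;
}

int demo(int x, int y) {
    return add_hint(x, y) + add_forced(x, y) + cold_path(x, y);
}
```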


Modern CPUs are optimized for calling functions. Spaghetti code with gotos is actually slower.



