
> 10 Mbyte. Today Lisp runs fine on a modern processor and some people tinker with Lisp-based operating systems, again.

My main desktop machine has 16 GB RAM, and its processor clock speed is probably also about 1600 times that of a machine from back then. Why can't I get my computer to perform 1600 times as much concurrent work as back then?

> Every iPhone does that now, since Apple's Objective-C and the iOS frameworks are actually that: efficient dynamic allocation, of runtime-typed small and large objects.

That's a luxury that we can nowadays afford. Even the cheapest entry-level smartphone is ridiculously more powerful than an 80's era workstation. But the amount of work computers do for us hasn't grown proportionally to the amount of computing power.




> Why can't I get my computer to perform 1600 times as much concurrent work as back then?

Is that a real question?

> But the amount of work computers do for us hasn't grown proportionally to the amount of computing power.

I don't know about you, but my laptop is much faster than my previous machines. Software that used to take the Lisp compiler half an hour to compile now compiles in a few seconds.


...exactly. So there's computing power to spare.


To spare, you say? I want to use that computing power for my own purposes. I suddenly decided I want to multiply gigantic matrices.


Okay. Go write some C, or some asm. It might feel unpleasant to you, but if you use most other languages, you'll be wasting too many cycles, and you won't be using that computing power for your own purposes. Don't forget to use as few abstractions as possible, and remember: every function call costs valuable time and computation.


> Don't forget to use as few abstractions as possible

Huh? I'm perfectly fine with abstractions. Just not with abstractions that are suboptimally designed and implemented.

> Don't forget to use as few abstractions as possible

Maybe you're confusing abstraction with runtime dispatch?

> every function call takes valuable time and computation.

Stroustrup's zero-overhead principle applies (though I otherwise don't regard him as a great language designer): abstractions must elaborate to code as good as the best you would've written by hand.
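
A minimal Standard ML sketch of that principle (function names made up for illustration): the version written against the fold abstraction and the hand-rolled loop can elaborate to the same code under a whole-program optimizing compiler such as MLton.

    (* Two ways to sum an int list. A compiler that inlines and
       monomorphizes List.foldl can emit the same loop for both,
       so the abstraction itself costs nothing at runtime. *)
    fun sumFold xs = List.foldl (op +) 0 xs

    fun sumByHand xs =
      let
        fun loop (nil, acc) = acc
          | loop (x :: rest, acc) = loop (rest, x + acc)
      in
        loop (xs, 0)
      end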


Every abstraction has a cost somewhere. If you want to use your computer to its full potential, avoid them all.


> Every abstraction has a cost somewhere.

It's very simple:

(0) Runtime performance degradation is unacceptable.

(1) Increased compile times are annoying, but they're tolerable in exchange for more exhaustive automatic static analysis. Anything of interest that the compiler can't prove about my code, I would have to prove myself by hand anyway, which would take even more time than the slowest automated static analysis.


> Runtime performance degradation is unacceptable

Requirements like these are not grounded in real software use.

In economic terms: cheaper/slower software can be preferable to more expensive/faster software. Example: Java-based software vs. the same thing written in C++.

In reliability terms: more robust/slower software can be preferable to more fragile/faster software. Example: Erlang-based software vs. C++ software on a network switch.

And so on.

Abstract/absolutist requirements like 'Runtime performance degradation is unacceptable' are not often found in actual software development and use.

Software usually has a multitude of important qualities, and raw performance is just one of them. One especially needs to say exactly what kind of performance is meant and how to measure it: throughput, latency, micro-benchmark speed, and so on. Optimizing for one (say, throughput) then has effects on others (say, latency), and pushing one far enough can even affect functionality.


All abstractions degrade runtime performance, even "zero cost" ones, because they get you to think in less efficient ways, about things like encapsulation and whatnot.

If runtime perf degradation is really unacceptable to you, you shouldn't be writing in functional languages, or even C. You should be writing in raw asm, as low-level as possible.


Encapsulation is making the internal representation of abstract data types not visible to their clients. In other words, it's a discipline for writing programs. Some languages (e.g., ML) just happen to helpfully enforce that discipline for you. It has no runtime impact whatsoever.
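
A minimal SML sketch of that point, using a made-up COUNTER signature: the opaque ascription ':>' hides the representation from clients, and the check happens entirely at compile time.

    signature COUNTER =
    sig
      type t
      val zero  : t
      val incr  : t -> t
      val value : t -> int
    end

    structure Counter :> COUNTER =
    struct
      type t = int          (* clients never learn this *)
      val zero = 0
      fun incr c = c + 1
      fun value c = c
    end

    (* Counter.value (Counter.incr Counter.zero) is fine;
       Counter.zero + 1 is a compile-time type error. At runtime a
       Counter.t is just an int: the discipline costs nothing. *)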


Not true. While the various abstractions themselves may have no runtime impact, they encourage programming in ways that, while they may be good for maintainability, aren't as fast. For example, if you're using ML, you'll likely represent a list of objects (say, structures representing addresses) by establishing a type for the objects, and keeping a list of them. A C programmer might use a statically allocated array, or actually embed a pointer to the next object in the list into the struct. The C program might be less maintainable, but it would be faster because it's doing less pointer chasing. An asm programmer might optimize further.
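
For concreteness, here is the ML-style version of that example in Standard ML (field names invented for illustration):

    (* A type for the objects, and a plain list of them. *)
    type address = { street : string, city : string, zip : string }

    val addresses : address list =
      [ { street = "1 Main St", city = "Springfield", zip = "00000" },
        { street = "2 Main St", city = "Springfield", zip = "00000" } ]

    (* List cell -> record -> fields: two indirections per element.
       The C programmer's intrusive 'next' pointer, or a static array
       of structs, keeps the data contiguous instead. *)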

Now, this is a fairly trivial example, and not too hard for a compiler to optimize out, IIRC, but it's just an example. Other instances of this are far harder to optimize away.

The further up you are from the hardware, the less aware you are of the perf tradeoffs, and the less you can do to fix it. So if you want the best perf, go for asm.


> For example, if you're using ML, you'll likely represent a list of objects (say, structures representing addresses) by establishing a type for the objects, and keeping a list of them.

In ML, most of the time I don't work with objects at all. I work with values. The beauty of programming with values is that the language implementor can represent them however he wishes, as long as he respects the semantics of the operations on said values. For example:

(0) Multiple logical nodes of a recursive data structure can be represented as a single physical node, eliminating the need to store pointers between them. For example, an ML implementation can determine that `List.tabulate (n, f)` always creates a list with `n` elements, and thus always pre-allocate a single large enough buffer before actually computing the elements. In Lisp, this wouldn't be a valid optimization, because Lisp mandates that every cons cell has its own object identity. (See the sketch after this list.)

(1) Values with large representations can be automatically deduplicated, hash-consed or whatever it takes to reduce their memory footprint. Again, this is only possible because physical identities don't matter in ML (except for mutable cells and arrays).

In other words, values give the language implementation freedom to make useful physical data structure choices. Unlike objects, which come tied to a fixed representation.
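
A sketch of point (0) in SML; the values are just an illustration:

    (* List.tabulate builds a list purely from an index function. *)
    val squares = List.tabulate (5, fn i => i * i)
    (* squares = [0, 1, 4, 9, 16] *)

    (* No SML operation can observe the identity of the individual
       cons cells of an immutable list, so an implementation is free
       to allocate all five cells in one contiguous block. In Lisp,
       (eq c1 c2) on cons cells is observable, which forbids this. *)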

There is no tradeoff between abstraction and efficiency, when you use the right abstractions.



