
I've been working with Haskell* for a couple of years, and quite often I work with code that I don't fully understand. I'll come across a terse bit of code, then carefully take it apart to see what it does (pulling pieces out and giving them names instead of passing them point-free, and adding type annotations). Once I see the whole picture, I make my own change and then carefully re-assemble the original terse bit of code. One could ask: wasn't the verbose version better? I'm going to lean on the side of no. If I left this verbose and other bits verbose then it would be hard to see the whole picture.
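
To make that concrete, here's a made-up sketch of the kind of rewrite I mean (the function names and the task are hypothetical, not from any real codebase):

    import Data.List (isPrefixOf)

    -- The terse, point-free version I might find in the wild:
    countBadLines :: String -> Int
    countBadLines = length . filter ("ERROR" `isPrefixOf`) . lines

    -- The same thing pulled apart while I figure out what it does:
    -- every intermediate value gets a name and a type annotation.
    countBadLines' :: String -> Int
    countBadLines' input = badCount
      where
        allLines :: [String]
        allLines = lines input

        badLines :: [String]
        badLines = filter ("ERROR" `isPrefixOf`) allLines

        badCount :: Int
        badCount = length badLines

Once I understand it, I fold it back into the one-liner.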

I think doing maths would be better if it were done interactively with software. If equations were code, you could blow them up to look at the fine details and then shrink them back to a terse form, while the software keeps track of the transformations to make sure what you write is equivalent. Maybe it's time to add a laptop to that paper pad?

* not arguing anything language-specific here, except that Haskell makes use of a variety of notations that make the code shorter and more like maths. More so than most languages.




> If I left this verbose and other bits verbose then it would be hard to see the whole picture.

I really can't sympathize with this. How exactly is this helping anyone at all, if you have to struggle with it yourself? Is it a bunch of dense monolithic code? Decompose it into smaller methods / separate files. Set up your text editor/IDE in an effective way for quickly navigating across large chunks of related code. IMHO there is a world of difference between terseness that helps readability and refactoring vs. terseness that makes you want to bang your head against the monitor.

That said, I think the requirements, or say the qualities, that define good code and a good dissertation are quite different. Code needs to be maintained, refactored and altered throughout its lifetime; a dissertation might only need to be built up and understood once, to prove a particular result which can be reused after that.


The way I see it, I have limited capacity to build a mental picture of whatever I am working on. When I'm looking at pages and pages of verbose and repetitive code, it is quite hard. What does this bit do? Just checking the error condition and re-throwing the error. What does that bit do? Same boring stuff. Where is the meat?

When I'm looking at few lines of terse but complicated code, it is easier; it is all meat and little fat. Just enough to make a good steak.

But this only works if I understand the mechanics of that terse code. So when I work on something else for a while and come back to code for which I no longer have an accurate mental picture, I need to refresh my memory.

I think mathematics is the same way. Imagine a full A4 page of equations. It is really hard, at least for me, to hold a mental model of what it all means in my brain. Sure, there's a ton of background I need to be familiar with, but that background doesn't have to sit in my mental picture. Imagine the alternative: suppose you wrote out the rules for how addition works, then multiplication, and then built it all up until you could do linear algebra. That's too much!

When I advocate terse code, I don't mean it in a "here's my obfuscated C code" sense. I mean that when I write "f . g . h" there might be more going on than meets the eye, but as long as you know the rules for what . means in this context, it is super easy to follow.
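
As a tiny (hypothetical) illustration, the only rule you need is that (f . g) x = f (g x):

    import Data.Char (toLower)

    -- A made-up pipeline written with (.):
    normalise :: String -> String
    normalise = map toLower . unwords . words

    -- ...which is just shorthand for nested application:
    normalise' :: String -> String
    normalise' s = map toLower (unwords (words s))

Dense, maybe, but completely mechanical to unfold.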


I find there's a huge difference between code that fits on a single screen and code that doesn't (and I've heard claims this is backed by research). So I'd far rather have lines that I have to stare at for a while to unpack than lines that are individually simple but I have to scroll up and down or jump back and forth to see the whole method.


Proof assistants are in some ways very similar to what you described. Coq [1] is a popular example. It helps control the complexity of larger proofs and verifies that everything derived along the way is correct.

[1] https://coq.inria.fr/
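
As a rough sketch of what that checking looks like (this one is in Lean rather than Coq, and the statement is a throwaway toy, but Coq works along the same lines):

    -- Each `rw` step is a transformation; the proof assistant verifies
    -- that every step really preserves equality before accepting it.
    example (a b : Nat) : (a + b) + 0 = b + a := by
      rw [Nat.add_zero]   -- drop the "+ 0"
      rw [Nat.add_comm]   -- a + b = b + a; the goal closes by reflexivity

If a step doesn't actually follow, the proof simply doesn't compile.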


I think it's about balancing various things. Having less code is good unless there's a hidden catch you need to be aware of or it takes a few weeks to unravel what the code does.

I prefer code that is understandable right away, is consistent and doesn't have any surprises.


A good example is Go. Many have written blogposts describing how great it is that it's such a simple language because the code is easy to read. And I can't deny that. The code is simple to read.

But then I read through pages and pages of such code, all with little meaning. Here's a loop, here we check for an error condition, here's another loop, here we check for another error condition. It makes it harder to see through all that and answer the question "what does this code try to accomplish?". At least for me, the more code there is, the harder it is to see.


Yes, Go is a low information density language. Haskell has high information density.



