More Good Programming Quotes (2016) (henrikwarne.com)
173 points by henrik_w on July 22, 2017 | 99 comments



"Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration." — Stan Kelly-Bootle

Thanks for putting that one in. Stan Kelly-Bootle was one of my favorite writers. He died recently; he was a Brit who wrote computer books and folk songs.

I think people who dual-specialize, in something left-brained and something right-brained, are better at both. I've only read parts of one of his books, a 90s book about UNIX. It was so clear and fun to read, while seeming to break many rules about writing clearly. I'm not sure how it works.


0.5 would be good for anti-aliased drawing where you only get non-blurry lines at the 0.5's :)


I can already imagine the pain when I'm trying to draw an un-aliased line.


I thought these two were pretty good:

“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.” — Linus Torvalds

That's how I always feel when people have lengthy discussions about spaces vs tabs. Good truth coming from the man himself.

“Sufficiently advanced trolling is indistinguishable from thought leadership.”

Kinda scarily true, when you see how some online communities that started mostly as trolling became real ideologies over time.


"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." (Fred Brooks)


Linus's quotation is similar to Wadler's law:

> In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position.

> 0. Semantics

> 1. Syntax

> 2. Lexical syntax

> 3. Lexical syntax of comments

http://wiki.c2.com/?WadlersLaw


And on that, I believe Linus uses 8-wide tabs. You are, apparently, doing it wrong if that causes problems.


Given the ease with which editors resize tabs, and given the problems caused by excessively nested code: seems reasonable.


I agree. I use spaces, 2 to 4 depending on the language's typical style, but even then heavy nesting is obvious.

I have a project full of what is essentially 'if (true) return true; else return false;', sometimes heavily nested. It's just not necessary.

Guard-style returns go a long way, but a long column of those may mask deeper problems of structure. I don't mean inheritance-polymorphism no-ifs, but rather that your code isn't dumb enough and perhaps your data isn't smart enough.
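To illustrate with a minimal Haskell sketch (my own toy classification rule, not code from the project above): guards flatten what would otherwise be nested conditionals.

    -- Nested conditionals, the shape being complained about:
    classifyNested :: Int -> String
    classifyNested n =
      if n < 0
        then "negative"
        else if n == 0
               then "zero"
               else "positive"

    -- Guard-style: one flat column of early exits.
    classify :: Int -> String
    classify n
      | n < 0     = "negative"
      | n == 0    = "zero"
      | otherwise = "positive"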


4. Format of whitespace


Very good, although I thought the dynamic typing one was a bit snarky. I'll happily give up dynamically typed languages like Clojure when structural typing can give me the same flexibility.

I say this as a person learning Idris and Haskell and loving it, btw, before anyone prematurely tries to convert me to static typing.


This quote riffs off the point that the issues of type are there whether or not your language requires you to be explicit about them.

On the other hand, the empirical evidence for the effectiveness of static typing is weak. This may be because the latest developments in static typing have not found their way into mainstream languages, but that is only a conjecture.

You may prefer this quote: "Static types give me the same feeling of safety as the announcement that my seat cushion can be used as a flotation device." - Attributed to Don Roberts by Martin Fowler.


As a Haskell proponent I'd be really interested to hear specifically what it is that you'd like to do in Haskell but can't.


Write a program that anyone can read. Haskell inevitably ends up looking more like a mathematical equation than the simpler constructs people are more used to finding in languages like Clojure.


Haskell is very readable if you learn to understand the type declarations, which are nicely set on their own line above every function definition. Haskell allows very general type declarations, but ideally once you've gotten your program running, you go back and make the types more specific. Anyone reading it can then see exactly what the whole codebase does without even reading the functions.
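As a small illustration (my own example, not code from the thread), the signature line carries most of the documentation:

    import Data.List (group, sort)

    -- The type alone says what the function consumes and produces:
    -- raw text in, each distinct word with its count out.
    wordCounts :: String -> [(String, Int)]
    wordCounts = map (\ws -> (head ws, length ws)) . group . sort . words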


I'd put it even more strongly. Haskell's the most readable language I've ever used, and I was an early(ish) adopter of Python, mostly because of readability.


Can you link to a small Haskell code base or single file that you believe is "more readable" than any other language?


I'll link to some of my own code, since I know it better than any other:

https://github.com/tomjaguarpaw/haskell-opaleye/blob/master/...

For someone who knows Haskell idioms this is extremely readable compared to functionally equivalent code in another language.


How easily extendable is that without the original source (e.g. when using as a library)? From what I can see it looks like if you can't edit the case statements you can't extend the functionality (e.g. expression problem stuff).

However, I'm not a Haskell pro, so I don't know if I'm just missing the obvious here.


Very easy! And that's (one reason) why it's written the way it is. Any PrimQueryFold can be extended to a fold on a type which extends PrimQuery with additional constructors.
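A toy version of the idea (illustrative names only, not Opaleye's actual types): the fold is a record of handlers, so an extended sum type can reuse it for the base cases.

    -- A fold over a closed sum type, packaged as a record of handlers.
    data Query = Table String | Product Query Query

    data QueryFold a = QueryFold
      { onTable   :: String -> a
      , onProduct :: a -> a -> a
      }

    foldQuery :: QueryFold a -> Query -> a
    foldQuery f (Table t)     = onTable f t
    foldQuery f (Product q r) = onProduct f (foldQuery f q) (foldQuery f r)

    -- An extended query type adds a constructor without touching Query;
    -- any existing QueryFold still handles the embedded base cases.
    data QueryExt = Base Query | Label String QueryExt

    foldQueryExt :: QueryFold a -> (String -> a -> a) -> QueryExt -> a
    foldQueryExt f _ (Base q)    = foldQuery f q
    foldQueryExt f g (Label s q) = g s (foldQueryExt f g q)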


This is like 90% `data` and `case`. Show me a language where type declarations and case statements aren't readable.


> This is like 90% `data` and `case`

Welcome to the world of Haskell! Sum types aren't even available in C, C++, C#, Java, Python, Ruby, JavaScript, ..., so I think you're just proving my point for me.



You can write unreadable code in any language. "Look, here's some unreadable code in your language" doesn't tell us much of anything.

That said... looking at your first example, it's really not that bad. It could probably be rewritten to be quite a bit clearer, but even knowing nothing about the library or surrounding context, it doesn't take me that much thinking to work out what's going on:

We have a dictionary implemented as a sorted list of pairs, and a list of records we want to update that dictionary with (after some transformation).

If there are no records, we give back the dictionary unmodified. If there is no dictionary, we build one out of the records. If there is both a dictionary and a bunch of records, we do a recursive merge of the two - compare the heads of each list, cons the smaller onto the result of the appropriate recursive call.
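A rough sketch of that reading (my reconstruction, ignoring the transformation step, not the library's actual code):

    -- Merge records into a dictionary kept as a sorted association
    -- list: compare heads, cons the smaller onto the recursive result.
    merge :: Ord k => [(k, v)] -> [(k, v)] -> [(k, v)]
    merge dict []   = dict
    merge []   recs = recs
    merge dict@(d:ds) recs@(r:rs)
      | fst d <= fst r = d : merge ds recs
      | otherwise      = r : merge dict rs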


> you cherry picked

That's exactly what I was asked to do!


For posterity, here's some very readable Haskell that has less data and case:

https://hackage.haskell.org/package/turtle-1.3.6/docs/src/Tu...


That may or may not be so (my contention is that it is not so) but that's another debate.

The question I'm interested in finding the answer to is specifically what important things you can do in Clojure but not Haskell because of Haskell's static types.


What can I do in Haskell that I can't do in Clojure, because of Clojure's dynamic typing?

If you want to add static typing, you're looking for a contract library. Some are runtime, some use the macro system to get compile-time checking. (In fact, core.typed looks almost exactly like Haskell type definitions.)

If you miss Haskell's pattern matching, you can find it at core.match.

You want monads instead of variadic functions? Nothing easier.

Laziness is a lambda away.

Clojure and so many similar dynamic languages have the power to give you the safety of static typing, if and when you want it. But they don't require it of you.

In some cases this can be bad, when you don't think through your data structures enough.

In other cases, it can be good, because you don't need to think as much while you put things together, and can go back and expand later.

I think every language has a time and a place, for the right purpose.


> What can I do in Haskell that I can't do in Clojure, because of Clojure's dynamic typing?

You are making a false equivalence here. This thread started with the implication that the flexibility of dynamic languages is more useful than anything static typing can give, and questioning the grounds for that is not the same as asserting the opposite.


I question whether one is better than another.

If Haskell's type system paves the way for the programmer to implement, innovate, and create, then there is no reason to change.

The same for Clojure, or any other.

I merely answered a question on Haskell's capabilities with a personal anecdote. Haskell has been hard for the teams I worked with; replacing it with Clojure enabled others to reach their potential.

But, as for my personal belief on whether static or dynamic typing is better? Different projects have different requirements, as do the hands that craft them.


> What can I do in Haskell that I can't do in Clojure, because of Clojure's dynamic typing?

Also an interesting question, but, again, not equivalent to the one that I asked! joncampbelldev very specifically said that Clojure gives him more flexibility than Haskell. I'm asking for specific examples; otherwise how can I improve Haskell or its documentation to be more appealing to fans of dynamic languages?


Ah, I've said too much in such a debated area, and clumsily.

Dynamic types give their power in what they can change. I do not know how easily this can be done in Haskell; I expect a different pattern or a more convoluted approach would be needed. But something like this:

    (defun greet
      ([:bob] "Hey bob")
      ([x] (str "Greetings, " x)))

So far, something familiar. But where's the dynamism? Let's grab a macro, so that we can do this:

    (addmatches! greet :chef-matches {:before :beginning}
      ([:emeril] "Love the zest")
      ([:child] "First, we baste the chicken!"))

Which later calls will then use in other conditionals; there is also the corresponding macro removematches! (These macros come from a library, but are not difficult to implement.)

Why would one ever want such a thing? It allows for the creation of a self-modifying parser, for one. Removes a lot of boilerplate for another.


I'm afraid I've never been able to understand macros. What does this do? Is it something like

    greet :: String -> String
    greet "bob" = "Hey bob"
    greet x     = "Greetings, " ++ x

    -- (this form needs the LambdaCase extension)
    addmatches :: (String -> String) -> String -> String
    addmatches f = \case
        "emeril" -> "Love the zest"
        "child"  -> "First, we baste the chicken!"
        x        -> f x

If so, you'd have to design it slightly differently to do removematches in Haskell, I guess.


Well, it modifies greet in place.

And this:

    greet :: String -> String

Wouldn't really be writable by a human, because:

    x -> f x

Can represent any two expressions, for example:

    (addmatches! greet :chef-matches {:before :beginning}
      ((list? x) :sym) (comment Boolean -> Symbol)
      (list (apply greet x)) (comment Function -> Recursive...))

addmatches! isn't a separate function. Basically, at compile time, a macro is given the raw untyped tokens from Clojure's parser, and can run all of Clojure's features against them. The *match! macros don't exist at runtime.


Readability is nice, but it is not the most important thing; I think what matters more is being able to reason about whether the code is correct - there are lots of ways code can look about right on a first pass while hiding subtle bugs. I do not know whether Haskell is actually superior in this regard, but it seems to be motivated by the right concerns.


From the type perspective, one challenge is processing data where you don't know and don't care what types and what structure parts of that data will have.

One domain is deserialization of JSON or XML in a way that's agnostic to the structure of any extra data that I'm not explicitly referencing. For example, if I want to take a structured message (JSON or XML), do computation on parts of that message (so I need to decode those parts to "normal" data structures), and create a modification of the original message where my results are added but everything else is unchanged.

The last time I had to do this it was a bit painful, as the serialization/deserialization libraries (I don't even recall which I used in the end; aeson? I tried a bunch) expected me to know/define the whole structure if I wanted to properly serialize the updated data afterwards, and would break whenever an updated API started returning a different message structure than expected (e.g. added new fields).

Perhaps I didn't find the right tools/approach to do this, but it's a rather common need, and doing the same is quite trivial in Python/JavaScript/etc.
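One workable approach is to skip typed records entirely and edit aeson's generic Value tree; a minimal sketch (assuming an aeson version where Object wraps a HashMap, i.e. pre-2.0):

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Aeson (Value(..), decode, encode)
    import qualified Data.HashMap.Strict as HM
    import qualified Data.ByteString.Lazy.Char8 as BL

    -- Add a "result" field while leaving every unknown field
    -- untouched, by never committing to a concrete record type.
    addResult :: Value -> Value
    addResult (Object o) = Object (HM.insert "result" (Number 42) o)
    addResult v          = v

    main :: IO ()
    main =
      case decode "{\"id\":1,\"extra\":{\"unknown\":true}}" of
        Just v  -> BL.putStrLn (encode (addResult v))
        Nothing -> putStrLn "parse error"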


Be able to clearly reason about the space complexity of code.

Even quite short pieces of code can have unexpected (at least to me) explosions of thunks, and it's quite difficult (at least for me) to find out where/how strictness markers should be added so that the code won't generate the totally unneeded gigabytes of temporary data that I see in the memory profiler.

It's quite easy to reason about the correctness and results of Haskell code execution, but it's hard to reason about how exactly that code will be executed, in ways that have an extreme effect on performance - not tenfold, but e.g. exponential relative to dataset size.
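The classic textbook instance of this (my example, not from the thread): a lazy left fold accumulates thunks, and a one-character change fixes it.

    import Data.List (foldl')

    -- Lazy foldl builds a chain of unevaluated thunks
    -- (((0+1)+2)+...), which can exhaust memory on large inputs:
    lazySum :: [Int] -> Int
    lazySum = foldl (+) 0

    -- foldl' forces each intermediate result and runs in constant
    -- space; the only difference is the strictness of the fold:
    strictSum :: [Int] -> Int
    strictSum = foldl' (+) 0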


Another worthy topic of debate that is completely unrelated to my question. Types and laziness are independent -- at least to the degree needed to provide an answer to my question.


> Types and laziness are independent

I think they are directly linked. If I have to worry about arbitrary types, my code can never be robust for the environment it runs in without raising some generic exception. This is the lazy approach. In a dynamic language, everything can be boxed into a static list of flexible types, and is by default, meaning my code is succinct. This doesn't address the question (which is a strawman), but it is the heart of the debate about static vs dynamic typing: getting things done faster (with some guarantees) vs many guarantees (but not all) at a slower rate of progress (because you have to essentially duplicate, at a higher level, what the dynamic interpreter is already doing).


I can't actually understand anything you've said. You are using this definition of lazy, right? https://en.wikipedia.org/wiki/Lazy_evaluation


That's not true. When the language is lazy, you can't give a different type to one expression that's lazy and one expression that's strict. You can write hacks into the language that tell the compiler to evaluate certain expressions strictly, but you can't capture this in the type system when the language is lazy by default. On the other hand, this is perfectly possible in a strict language.


Firstly, that falls under my "at least to the degree needed to provide an answer to my question" rider. Secondly, no, you can't type the values, but you can type the consumers of the values, i.e. the functions: http://h2.jaguarpaw.co.uk/posts/strictness-in-types/


Ah, didn't realize the original question was limited to comments on dynamic typing. Regardless, that's a clever article, and an interesting tradeoff.


Exploratory coding that doesn't require tree-shaking refactors when a type needs to change.

"The spec changed, and I now have to be able to handle geese, where I originally typed for ducks."

There are ways and tools to minimize the impact, but it's still a lot of extra overhead to think about and deal with.


Aha! The first reply that actually provides an answer to my question. Many thanks. Could you say more about this, or perhaps provide an example? It's hard to know what you mean.

EDIT: Attempting to answer my own question, perhaps it occurs when you define a product type with a constructor

    data Foo = Foo Bar Baz

and then you want to add a new field to Foo

    data Foo = Foo Bar Baz Quux

All your explicit uses of Foo in construction and pattern matching have now broken.


Open type variation requires existential types. Not the easy types people are used to.


Could you explain what you mean by "open type variation"? Google's throwing up nothing relevant ...


I think it refers to open unions, which are types specified with some constructors that might have additional constructors defined later. As you say, Google is curiously reticent, but see, for example, p. 60 of http://www.seas.upenn.edu/~cis500/cis500-f06/ocaml-book.pdf .


Sorry for the confusion, JadeNB is correct: open type unions. For when you want plugin-like code without significant recompilations.


OK, fine. Well in Haskell we have Dynamic. Of course matching on a Dynamic may fail, but that's OK because the basis for comparison is untyped languages! Matching on types can always fail there too.
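For reference, a minimal sketch of Dynamic (my example): recovering a value yields a Maybe, so the possible failure is explicit.

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- A heterogeneous list; getting a value back out can fail,
    -- just as a type mismatch can in a dynamically typed language.
    bag :: [Dynamic]
    bag = [toDyn (2 :: Int), toDyn "two"]

    ints :: [Int]
    ints = [n | Just n <- map fromDynamic bag]  -- == [2]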


With Dynamic, yes, but not with existential quantification. The type is a proof that the type class is implemented.

I also didn't know Dynamic yet, thanks. I'm not that fluent in FP.


Polymorphism?


I don't think that can be it. Polymorphism doesn't require existential types!


Closed polymorphism can always be pattern-matched, but adding a case requires modifying all matchers. Open polymorphism is the default in OO interfaces: you can always implement a new type by adding code locally.

And since existential types can be monomorphised and compiled to binary code, they allow type-safe plugins.
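A small Haskell illustration of the two flavours (my own sketch, using the ExistentialQuantification extension):

    {-# LANGUAGE ExistentialQuantification #-}

    -- Closed: a fixed sum type; adding a new shape means editing
    -- every function that pattern-matches on Shape.
    data Shape = Circle Double | Square Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s

    -- Open: a class plus an existential wrapper; new instances can
    -- be added in other modules without touching this one.
    class HasArea a where
      areaOf :: a -> Double

    data AnyShape = forall a. HasArea a => AnyShape a

    areaAny :: AnyShape -> Double
    areaAny (AnyShape x) = areaOf x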


Yeah, I was referring to polymorphism as we have in OO, where old code can call new code. I didn't even know "closed polymorphism" was a thing (is it equivalent to algebraic types?).


I'm not precise with polymorphism jargon anymore (it's all the same to me), which I probably shouldn't do. I consider two factors: first, closed/open (or finite/arbitrary) ways of 'remembering' an erased type, and second, the sugar for nested 'remembering'. Almost all languages have both (open and closed), either built in, as a library, or as a design pattern for that language. It boils down to either a tagged union or a function pointer, and each can be transformed into the other if needed.

Assembly and C have tagged unions and function pointers. Java has visitors, instanceof, reflection. C++ has visitors, boost::variant, dynamic casts, mach7. Haskell has ADTs, pattern matching, existential quantification, Dynamic. Rust has pattern matching, traits, Any. Python, JavaScript, Lua, and Lisp are dynamic; it's easy enough to create both.


I've not found an elegant way to express in Haskell what can be expressed trivially with modules and functors in SML/OCaml.


Another interesting point that is not relevant to the question I asked, which was about dynamic typing!


Some of my favourite quotes are from Alan Perlis's Epigrams on Programming [0].

My (current) favourite: "A programming language is low level when its programs require attention to the irrelevant."

[0] - http://www.cs.yale.edu/homes/perlis-alan/quotes.html


My favorite: "Everyone can be taught to sculpt: Michelangelo would have had to be taught not to. So it is with great programmers."


God, what bullshit. Even Michelangelo started with the basics and had to put in a huge amount of effort to master his craft. He had to learn.


> “When your hammer is C++, everything begins to look like a thumb.” — Steve Haflich

As someone who works with C++ every day on my job, this made me laugh out loud. I think I'm gonna write this one on our whiteboard on Monday.


I like this quote: "'Even a single quoted word can be a "double-edged sword"' she said. 'You can't "escape" that'. He didn't."


Thanks for posting. This gave me a good laugh:

– What do we want?

– Now!

– When do we want it?

– Fewer race conditions!

@wellendonner


That made me belly laugh


"Focus is a matter of deciding what things you’re not going to do." (John Carmack)


Harold Abelson — 'Programs must be written for people to read, and only incidentally for machines to execute.'


"When stuck debugging, question all of your assumptions about the program."

A shower thought. Helpful if the program does something other than what you thought it did, allowing you to figure that out more quickly.


More a reflection of insufficient logging telemetry at higher inspection (debugging) levels.

If you have the above and still can't figure it out, maybe you're looking at too much. Sometimes it can help to isolate the stage where things appear to go off the rails, and then turn up the detail once you've got an idea where to start the search.


Perlis's Epigrams are my favourite. http://www.cs.yale.edu/homes/perlis-alan/quotes.html

"When we write programs that 'learn', it turns out that we do and they don't."

I still wonder what he means when he says: "Everything should be built top-down, except the first time." What is different the first time that warrants not doing it?


In my experience with my own projects, building top-down only works if you already know how the lower-level components will be structured. Else you'll inevitably end up overengineering the lot.


The first time you don't know what the top looks like.


Domain knowledge?


Many of these are good. I agree with joncampbelldev. The dynamically typed quote is quite snarky.

To that I'd reply: Statically typed languages are when you have to tell a computer that 2 is an integer and 'two' is a string.

I know this argument goes back and forth and never ends. And I'm not trying to start a flame war.

But I will say this: type systems occupy a space on a spectrum in my opinion, and the spectrum is imperative vs. declarative languages.

It's a conceptual spectrum, of course. And the difference is between telling the language exactly what you want and how you want it vs. telling the language what you want to get.

On one end you have C# as the avatar of strongly, statically typed, imperative languages. In the middle, you have something like Python, which is strongly typed but dynamic, and at the far end you have SQL, which is as strong or weak as you define your tables, as well as entirely declarative. You can choose your own adventure with databases.

You have some bad citizens like JavaScript and PHP, but the conversation is really about the use-case.

IMO, strong typing is far more important than static typing. Because I personally can't stand silent errors. Weak type systems are the root of all evils. Anything that can fail silently is dead to me. Fuck you, <language>. Tell me if something broke. Don't just quietly do something I don't expect.

Which approach to use to me depends on the team and the size of the project. You can run with Python for small teams. You really can't for large teams. You really shouldn't use C# for rapid prototyping or small teams.

But you absolutely should use a statically typed language for large teams.

Edit: to put the sharp in my example of a statically and strongly typed language.


You are making a fair comparison of dynamically typed languages (Python, Lisp, Clojure, ...) to badly statically typed languages (C, C++, C#, Java, ...). What's missing from your analysis is a comparison to well statically typed languages (Haskell, OCaml, F#, ...).


I thought I was talking about the principle of static typing, rather than any language's implementation of it.

Back to my original point: if you have to tell your language that 2 is an integer, there is a certain kind of issue with that. And if you don't have to tell it that, then there's another issue that happens.

I don't know Haskell or OCaml beyond toy programs, and I'm willing to hear things beyond my knowledge. But as far as I do know, the point doesn't change.

I'm happy to eat my words and learn.


The GP's point was that "good" statically typed languages have type inference and indeed do not need to be told that 2 is an integer and "two" is a string. It is pretty orthogonal to the question of dynamic vs. static typing.

In C++:

    auto i = 2;
    auto s = "two";

Scala:

    val i = 2;
    val s = "two";

C#:

    var i = 2;
    var s = "two";

Rust:

    let i = 2;
    let s = "two";

and so on. Note that these are not dynamically typed bindings - you cannot later assign a string to i, or an integer to s.

However, these are examples of local type inference where the compiler simply deduces the type of a variable binding based on the type of the bound expression (although Scala and Rust type inference is actually somewhat more general than that). Haskell, on the other hand, has global type inference where the compiler considers the whole program when assigning types to bindings that are not explicitly annotated. However, some arguably useful capabilities of a type system such as subtyping and implicit conversions make global type inference a much more difficult problem to solve.
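To make the contrast concrete (a tiny example of my own): a Haskell module can omit every annotation and still be fully statically checked.

    -- No signatures anywhere; GHC infers the most general types,
    -- e.g. swap :: (a, b) -> (b, a) and double :: Num a => a -> a.
    swap (x, y) = (y, x)
    double x = x + x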


I do not know of any language in which you have to declare that 2 is an integer. I do know of a few that require you to have stated whether the variable you are assigning it to is to be regarded as an integer or a float.

This is not a good example, as it is unreasonably biased towards making explicit typing look like pointless bureaucracy. A better example would be "2". Or the struct / tuple { 2, 2.0, '2', "two" }.


> if you have tell your language that 2 is an integer, well, that is a certain kind of issue with that.

The question is what size of integer - that matters, no matter how much our programs attempt to abstract it away. And the value 2.0 is an integer as well, but no program will consider it to be one, instead treating it as some approximation of the integer value 2.


> I thought I was talking about the principal of static typing, rather than any language's implementation of it.

Then may I suggest expanding your mind by learning about good implementations of static type systems? Any of Haskell, OCaml or F# would be a good place to start.


But C isn't the best example of a strongly typed language.


Sorry about that. Edited to say C#.


How do you know 2 isn't a string, and two isn't an integer representing, well, 0 of course :-)


"I've been using Vim for about 2 years now, mostly because I can't figure out how to exit it." @iamdevloper


1. The bug is not in the section of code you’re looking at.

2. Rule #1 is of no practical use.

Philip Roe on debugging CFD codes


"The double is cast." -- Julius C'ster


"I came, I coded, I crashed." -- Julius C'ster


[Warning: spoilers ahead]

> "Debugging is like being the detective in a crime movie where you are also the murderer."

So, essentially Memento?


Given how so many programs operate (little global state, minimal information passed between blocks of execution), there are definitely some parallels.


"I think that I can safely say that nobody understands template mechanics." -- Richard Deyman


"Great, just what I need.. another D in programming." -- Segfault


"That which _can_ be configured _must_ be configured. Corr: defaults aren't.

(mine, inspired by WIndows 1.0 and it's only got worse since).


Why oh why do fans of agile programming and dynamic languages always hide behind snarky epithets from other people?

This imbalance is, in my mind, the greatest reason why people still prefer C and Java over Clojure and other hipster languages: the feeling that those people must be overcompensating for something - just look at how they feel the need to talk down on others all the time.

In the left corner, you see snarky hipsters complaining about how all security problems would go away if only everybody stopped using C, and if people dropped their methods and adopted agile / XP / scrum / other fad of the week.

In the right corner, you see C programmers writing code in the waterfall model.

I'm much more attracted to the right corner. Not because C or waterfall are great. Because of the (to me) juvenile unprofessional behavior of the people in the left corner. You see, I'm a programmer. I'm attracted to people who are programming. Not to people who are telling others how not to lead their lives, while appearing to produce scant noteworthy code themselves.

Compounding this is that OP is leaving a link to his own blog here. And this blog post is basically "here are some things other people said that I happen to agree with". I learned nothing.

BTW: The very same behavior makes me think highly of postgresql and less highly of nosql databases. My instinct tells me to trust people who don't feel the need to trash-talk others.

EDIT: to stay with the theme of the post: I always liked "Don't learn the tricks of the trade. Learn the trade."

EDIT: To win me over, don't tell me my stuff is bad. Show me that your stuff is good.


The only quote in there regarding dynamic vs static is one disparaging dynamic languages. Also, the complaint about C is about it being too weakly typed, not about it being statically typed.



I don't think so, as none of those used "snarky epithets from other people".


fefe23 said "hide behind snarky epiphets from other people" (my emphasis) suggesting that the criticism of dynamic programming was discounted because it was snarky.


That's not what "hiding behind X" means. It means to use X to distract people from yourself / your own positions. So here, hiding behind snarky epithets would mean using snarky epithets to counter criticism of dynamic languages.


I know that's not what it means, but in the absence of knowledge regarding whether fefe23 is a native English speaker I think the interpretation I gave of his/her comment was a reasonable one.



