
There are two ways to deal with bugs. Correcting them, and avoiding them. ... Of these two, the only one that is significantly influenced by the structure of programming languages is avoidance.

This is an interesting statement to me, and one that I think says something about the author's background as a programmer. This is not my way of saying he is a poor or inexperienced programmer! I skimmed through a few other articles on the site, and what stood out to me is that he is heavily "academic." Some of the best programmers I have worked with had deep academic experience and were quite adept at writing fast, concise, correct code in many different languages.

What stands out to me is the absence of something that (in my observation) most experienced programmers in non-academic settings begin to realize at some point: while avoiding bugs (by, for example, choosing a language with strict type-checking, or immutable data/vars) is quite important, writing code that can be easily corrected is arguably just as important!

With regard to just the use of a programming language, this comes down to choosing a careful balance of constructs - syntax, features, naming, etc. Coding standards for a project or organization are not just there to stroke somebody's ego or cater to the "lowest common denominator". Well-chosen standards and conventions are there to simultaneously avoid bugs and make code easier to debug and maintain - which is exactly what one must do to make corrections!

It is therefore crucial to insist on it when choosing (or designing) one.

This sentence is what really drove me to comment: No programming language can prevent all bugs. Not even most bugs.

In practice, I have even found that constructs and limitations that are intended to prevent bugs of one type can lead to bugs of some other type. In the worst cases, these limitations can lead to programmers making code that is much more verbose and complicated, which, of course... leads to more bugs that are harder to correct.

IMO, one should choose a language based on many, many more criteria than "avoidance of bugs." Personally, one of my top criteria is to choose a language with which those who will write and maintain the software (now and in the future) are going to be most productive.




> writing code that can be easily corrected is arguably just as important!

The easiest errors to correct are the ones that are detected as early as possible.

> No programming language can prevent all bugs. Not even most bugs.

Sure. No programming language can prevent you from misunderstanding a specification. But some languages can prevent you from missing corner cases in your case analysis, or from redundantly writing overspecific code that works in essentially the same way for a wide range of data types.
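
To make that concrete, here is a rough Rust sketch (the Response enum, describe, and first_or are invented for illustration): the compiler rejects a match that misses a variant, and one generic function replaces "essentially the same" code written per type.

    // Invented example type: adding a new variant later forces every
    // `match` over it to be revisited at compile time.
    enum Response {
        Ok(String),
        NotFound,
        Timeout,
    }

    fn describe(r: &Response) -> String {
        match r {
            Response::Ok(body) => format!("ok: {}", body),
            Response::NotFound => String::from("not found"),
            Response::Timeout => String::from("timed out"),
            // deleting any arm above is a compile error, not a runtime surprise
        }
    }

    // One generic function instead of near-identical copies per data type.
    fn first_or<T: Clone>(xs: &[T], fallback: T) -> T {
        xs.first().cloned().unwrap_or(fallback)
    }

    fn main() {
        println!("{}", describe(&Response::NotFound));
        println!("{}", first_or(&[1, 2, 3], 0));
    }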

> In practice, I have even found that constructs and limitations that are intended to prevent bugs of one type can lead to bugs of some other type.

I have no idea what you are going on about. What kind of bugs do features like algebraic data types (Haskell, ML), smart pointers (Rust, to a lesser degree C++), effect segregation (Haskell) and module systems (ML, Rust) lead to? I can only see the bugs they prevent.
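
For the smart-pointer case, a minimal Rust sketch (standard library only, nothing project-specific assumed): a Box owns its heap allocation and frees it exactly once when it goes out of scope, so the "forgot to free" / "freed twice" / "used after free" family of bugs cannot even be written.

    fn main() {
        let original = Box::new(vec![1, 2, 3]); // heap allocation, uniquely owned
        let current = original;                 // ownership moves; no second owner exists
        // println!("{:?}", original);          // rejected at compile time: use after move
        println!("{:?}", current);              // freed automatically at end of scope
    }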

Normally, the kind of feature that "leads to bugs of some other type" does not try to prevent bugs in the first place, it just mitigates their consequences. For example, bounds-checked array indexing does not try to prevent programmers from using wrong array indices; it just turns what would be a segfault into an IndexOutOfRangeException.
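
A short Rust sketch of that distinction (again standard library only): the bounds check does not stop me from computing a wrong index, it only changes what happens when I use one.

    fn main() {
        let xs = vec![10, 20, 30];
        let i = 7;              // a wrong index, produced by a fallible human
        // let y = xs[i];       // would panic at runtime; the bug still exists
        match xs.get(i) {       // the non-panicking form makes the bad case explicit
            Some(y) => println!("{}", y),
            None => println!("index {} is out of bounds", i),
        }
    }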

> Personally, one of my top criteria is to choose a language with which those who will write and maintain the software (now and in the future) are going to be most productive.

I see writing buggy code as negative productivity. So a language that gives you the illusion that you are writing correct code, when you in fact are not, actually makes you less productive.


Probably my favourite open-ended interview topic for programmers is asking them to rank various properties code might have in order of importance, and explain why they chose the order they did.

For example, one possible list of properties might be conciseness, correctness, documentation, efficiency, maintainability, portability, readability, and testability.

Often, I can learn a great deal about what sort of person I’m talking to just by watching them define their terms, decide what assumptions they think are necessary, and then reason through the resulting dependencies.

I get the feeling that the parent posters (hercynium and catnaroek) might argue for quite different orders, but both with good reasons.


Over the years I've come to the conviction that the two most important properties of programs are simply: 1. Correctness, 2. Maintainability

A program that does not do what it's supposed to is of little value. This is a relative metric, however: a program can do many valuable things right, yet have a few bugs.

But once a program does what we want, what else do we want from it? We want the ability to change it easily, so it can do even more things for us.

Maintainability is also a relative metric, and even harder to quantify than "correctness". However when looking at two ways of writing a specific part of a program, it is often easy to say which produces a more maintainable solution.


Many good candidates seem to anchor on those two properties (correctness and maintainability) as a starting point. More generally, they tend to identify that some of the properties are directly desirable right now, while others have no immediate value in themselves but are necessary to ensure that you can still have the directly desirable properties later. Which ones take priority under which conditions can be an enlightening conversation, often leading to related ideas like technical debt, up-front vs. evolutionary design, and so on.


Bingo!


Our backgrounds and experiences may well be very different, leading to starkly different opinions - but, well... that's programming!

Though I don't know Haskell, ML, or Rust, I do know Java, Clojure, C++, and a few other languages quite well.

In a nutshell, I like type-checking, smart pointers, module systems, and such. They often save me lots of trouble! But not all the time. Sometimes, these features get in the way - either of making the code more flexible, or of making it more readable. (casts dirty look at Java)

But at the end of the day, no matter how precisely one can communicate to the compiler, and no matter how good one's compiler is at detecting (or even correcting) inconsistency, flawed logic, corner cases, over-specificity and under-specificity (or the reverse of genericity, take your pick)... we puny-brained humans still manage to screw things up. The compiler can't read the spec, never mind actually understand the problem being solved... so it's up to us to attempt to translate.

And fill in the inevitable gaps.

And attempt to anticipate future needs.

And finish on-time.

With our pitiful, buggy, meat-based processing organs.

Sure, "readability", "maintainability", and "understandability" are all pretty subjective, but that's part of the point. With the languages I've used, I find that sometimes the "safety mechanisms" get in the way. Working around them with "design patterns" tends to add code that might not otherwise be necessary. And might contain more bugs. I'm sure I'm missing something life-changing by not knowing Haskell or ML, but more important to me is that the code I write be, well... "readable", "maintainable", and "understandable" by whomever might be working on it (fixing, extending, whatever) next!

'Cause experience has shown me that bugs happen. And I want finding and fixing them to hurt as little as possible - whether it be me or some other poor fool.

Oh... also, I couldn't help but have a snarky response to your last remark: (no offense intended :)

> I see writing buggy code as negative productivity. So a language that gives you the illusion that you are writing correct code, when you in fact are not, actually makes you less productive.

FTFY: I see writing buggy code as inevitable. So a language that gives you the illusion that you are writing correct code (by successfully compiling) when you in fact are not, actually makes you less productive.



