What most young programmers need to learn (joostdevblog.blogspot.com)
203 points by GarethX on Jan 5, 2015 | 106 comments



> Code in comments [...] When asked applicants are usually well aware that commented-out-code is confusing, but somehow they almost always have it in their code.

In my experience this is a common symptom of not using a version control system. You change a line of code, but you want to be able to undo the change if it doesn't work, possibly half an hour later when you have changed other parts of your code (so your editor's undo is of no use).

I also see this from people who are using VCS, but that's mostly because they are overcautious and/or have not really gotten "warm" with the VCS.

A final, very small percentage are familiar with VCS and confident in their code, but leave that commented stuff in there accidentally. The usual cause is that they have not (yet) acquired the habit of reviewing the diff before committing.


I am guilty of commenting out code when I am working on a problem, but before committing I remove it unless it is the simple version of some function that I made way too complex in order to improve performance. I find it easier to understand later what the complex code is supposed to be doing if I have the simple implementation in the comments.


I suggest you move the simple version into a unit test case, making sure the simple version and the complicated one produce the same results.

Plus I like to add a perf test that shows the complicated version is faster than the simple one, which I can rerun after major upgrades to the lower-level system. E.g. when upgrading from Java 1.5 to 1.6 I could remove some complicated code because the simple version was then just as fast due to an improved JIT.
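As a minimal sketch of that idea in Python (the functions and names here are hypothetical, not from the article): the simple version lives on as a test oracle instead of as a comment, so the two implementations can never silently diverge.

```python
import random

def sum_even_simple(xs):
    """Reference implementation: obvious, but does more work per element."""
    return sum(x for x in xs if x % 2 == 0)

def sum_even_fast(xs):
    """'Optimized' version whose behavior must stay identical to the simple one."""
    total = 0
    for x in xs:
        if not x & 1:  # bit test instead of modulo
            total += x
    return total

def test_fast_matches_simple():
    """The simple version is the oracle, checked against random inputs."""
    for _ in range(200):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        assert sum_even_fast(xs) == sum_even_simple(xs)
```

A timing comparison (e.g. with `timeit`) can sit alongside this so the complicated version can be retired the day it stops winning.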


Yes, I also think this is what I should do. Unfortunately, when I first wrote the code I spend most of my time working on, I didn't write unit tests at the function level (all my tests are at the module level). I have been meaning to get around to correcting this, but the task is now so huge I fear it would take me more than a year to write all the unit tests :(


IMHO every automated test helps.

Don't aim for 100% test coverage; aim at using unit tests to develop faster and more efficiently, and the test coverage stats will come automatically.

I personally need to improve my TDD, but I am quite happy with my test driven bug fixing approach. When working on a bug I first turn it into a JUnit or selenium test, and then fix the bug and make the test pass.
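That bug-first workflow can be sketched in a few lines of Python (the `parse_price` bug here is invented for illustration, not from the comment): first capture the reported bug as a failing test, then fix the code until the test passes.

```python
# Hypothetical bug report: parse_price("1,200") returned 1.
# Step 1: write a test that reproduces the bug. Step 2: fix the code.

def parse_price(text):
    # Fixed implementation. The buggy version did int(text.split(",")[0]),
    # which silently truncated at the thousands separator.
    return int(text.replace(",", ""))

def test_parse_price_handles_thousands_separator():
    # This assertion failed before the fix; now it guards against regression.
    assert parse_price("1,200") == 1200
    assert parse_price("42") == 42
```

The test stays in the suite forever, so the same bug cannot quietly come back.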


I wouldn't call the simpler implementation "commented-out code", as it's doing what the best comments do: making complex code more readable.

Having random scribbles, alt solutions and other "shitty" comments while working on a problem is definitely not a sin if you clean them up before pushing them forward.


No... you are making your code hard for others to understand by doing that. That's if you check it in like that. Nothing wrong with leaving it there during development.


I disagree. If I see 25-40 lines of something that LOOKS like it should be simple, it's tempting to replace it with the four line version. A comment which says, "You might think this would be better as {{4 lines}}, but it is too slow... because this/that/etc" can be VERY helpful, so that one doesn't reproduce the problems (and the time fixing) that caused the initial un-simplification in the first place.


I do explain in the comments why the simple function was removed and why the complex version is being used. The biggest problem I have found over time is divergence of the simple version from the complex version. I try to avoid this by updating and testing both versions, but whenever you have code duplication there is a risk that they will diverge.


That's totally different from just leaving it there without any context at all, which is what I was referring to.

An actual comment is of course helpful, but when people check in code that has been commented out, it usually has no explanation at all. That's the confusing part.

I'm talking about stuff like this (and I believe so was the author):

    //int foo = 10;
    int foo = 20;
    if (bar)
    //if (foo > 0)
    {
        foo++;
    }
    //else
    //{
    //foo--;
    bar = false;
    //}


Maybe for just a small code segment, a comment is a lot more convenient than going all the way through git for it?

What if there's some functionality in a method you're not sure you want to take out or change, but you've also added to the rest of the file, so you commit it with a small code segment commented out and a short explanation? That's not hurting anything, and it'd be a PITA and overkill to put it in a different branch or make a separate commit just for it.


> Maybe for just a small code segment, a comment is a lot more convenient than going all the way through git for it?

I don't get this; stashing changes, branching, even just copying and pasting something from history in git, are all very fast for me (using git in emacs with egg). Granted, I can comment-dwim with M-;, and I've conditioned myself against "just" commenting out code, but still.


> I also see this from people who are using VCS, but that's mostly because they are overcautious and/or have not really gotten "warm" with the VCS.

Yeah, that could probably describe me. Reverting a particular piece of code back from VCS seems like more work (the kind of "I need to think about it" work), which is enough for me to leave bits of code commented when working on some particular task. I never leave them there for long - usually only until I can get the new piece of code working and tested. The commented code serves as a reference/fallback.


This is what I do as well. I use VCS but it's more work. If I'm just refactoring a piece of code I comment it out while I work on the new stuff and test it. It also has the added benefit that I can reference the old code easily if needs be. Occasionally I will commit with the commented code still in place (until I'm 100% confident with my new code) but I usually follow that up quite quickly with a commit removing it.


I'm roughly in this boat as well, though I don't want to be.

I'd rather be intimate with git.


> I also see this from people who are using VCS, but that's mostly because they are overcautious and/or have not really gotten "warm" with the VCS.

I think that's partially because many developers will "learn" a VCS when working on a school project or hobby project by themselves. For a personal project, often all you really need is dead-simple versioning, and even then you may complete an entire project with just a linear series of commits on a single branch. It's easy to avoid really learning a VCS in that environment.


Code should exist in comments when it's going to be uncommented shortly. This, because 1) reaching into the VCS is extra work compared to uncommenting something already present (think cache vs disk), and 2) more importantly, it is code that anyone making changes nearby should be aware of.

Code in comments may also occur as a symptom of poor VCS habits, but that's not always the case.


Yeah... I need to stop doing this. Hangover from pre-VCS academic code-soup habits.


Another reason for keeping the commented-out code could be that reverting to a previous version of the code is more work than checking in a new version. Or when the old commented-out code is used as an explanation of why the current code looks the way it does.


We have a saying over here - TWGIF[1]. Any time someone checks in commented out code, they get chastised.

1: TWGIF - That's what git is for


I've gotten in the habit of adding trailing whitespace when I comment out code or do any printf debugging. It's still quick, but then git yells at me if I try and commit it.
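The check git performs here (e.g. via `git diff --check` or the pre-commit sample hook) can be approximated in a few lines of Python; this is a rough sketch of the idea, not git's actual implementation:

```python
def find_trailing_whitespace(source):
    """Report (line_number, line) pairs for lines ending in spaces or tabs --
    roughly what git's whitespace check complains about at commit time."""
    return [
        (i, line)
        for i, line in enumerate(source.splitlines(), start=1)
        if line != line.rstrip(" \t")
    ]

# The commented-out debug line carries the deliberate trailing space.
snippet = "int foo = 20;\n//int foo = 10; \nreturn foo;\n"
flagged = find_trailing_whitespace(snippet)
assert flagged == [(2, "//int foo = 10; ")]
```

The trick in the comment works because the deliberately added trailing whitespace trips exactly this kind of check, turning "I forgot to delete the debug line" into a hard stop at commit time.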


That's an interesting way of dealing with it - I have my text editor strip trailing whitespace and ensure there's a newline at the end of the file on every save, so it wouldn't work for me.

That's okay though. I visually review the diff before I commit (almost) every time.


We use version control, yet there are people I work with who constantly check in commented-out code, even when asked to stop. It's sloppiness and not paying attention to details or caring.


What I've been surprised to learn after leaving school is how important it is for code to be stylistically sound: programmers should be as thoughtful about the structure, readability, and conventions of their code as professional writers are of their prose.

This was something that didn't dawn on me until reading Matz's philosophy of "programmer happiness" in designing Ruby. Code seems sterile, something that you just put into place in a logical manner until it works. In small class projects, you don't realize the mental toll of reading through a messy code base. Being unhappy about a monolithic, massive codebase is easy; the problem is that on small projects, you don't notice the mental tax of unclear code that can drain your happiness and productivity.

It took a long time of professional coding for me to realize how much thoughtful design of code could make me a happier coder, just as adept use of language contributes to happier communication in all other areas of life.


I'm a junior programmer still, so I'd like to share my thoughts regarding this post. I'd actually like to say that I've been aware of all these (code commenting, incorrect function names, etc.), and the thought of changing them is always "it takes up time" or "you don't know if it's worth the time." But I'm always asking my superior that specific question: "can I get rid of this?" Unfortunately, wherever I've worked, I haven't been working under very experienced people either, so they've responded with a simple "uhhhh..." and given me no answers. In the end, I fall into the very habits this article's main points warn against.


Just a few points...

I'm happy you are not afraid to call yourself "junior". Humility goes a long way to becoming a great programmer.

Asking permission is obeisance but it is also communication. Whoever you asked may have knowledge of dependencies upstream or downstream. By asking, you gave that person the chance to add a little context to the change you propose to make and, quite possibly, prevent the team from working overnight to correct an issue you created. Even with 25 years in the field, I still ask other engineers about changes I want to make. Quite often we talk through any ripple effects before I set hands to keyboard.

Be especially careful in modifying interfaces, library calls, etc. Basically, any module that other systems depend on should be modified with a lot of caution and testing. Your caution should grow at least linearly with the dependency graph.

Over time you will develop a sense of which changes you can make without asking, which changes you should talk with others about, and which ones to stay clear of.

Your primary responsibility is creating good software. Your second responsibility is staying employed. Don't fret over every oddity you see in a code base. Refactoring is good for all the reasons people listed in the comments. Figure out which ones have the highest value and talk with the other developers about them. ("Live to fight another day")


Why do you ask for permission?

If I am working on a task and the code got a bit too messy for my liking, I'd simply refactor; it's simply part of the task. Working != complete. If something goes wrong, it will be easy to revert, given it is committed in a sane way (i.e. not loads of unrelated work under one commit).

It's a bit different if you want to do major refactoring (taking multiple days). But with small ones, just do as you go ;)


As a junior programmer in the contracting world, I ask the same as it is also a business question. You do not want to create so much extra work in the process of refactoring that you wind up losing money or running out of time before creating what the client asked for. Asking the super (who is also tech savvy) allows him to worry about balancing the business with the needs of his code base.


I would submit that being a junior programmer in the contracting world is a nontrivial part of the problem. Contracting relies on a lot of really bad incentives that I personally suspect are largely antithetical to good practice.


Yeah, I think the only time you need to ask permission is if the refactor would cause additional regression testing. Depending on the system, and unit test coverage, this may not even be a big deal, but sometimes, if you have a dedicated QC department, they may want to regress parts of the system even if they are covered by unit tests. Such regression could mean dates slip, etc, etc, and perhaps that's why someone might say "no."


Yes, just like cleanup is part of construction.


I find that it's always worth the time. Even more so when I'm working on a quick fix to code someone else wrote. I kind of have a few reasons for it:

1. Making the code cleaner is reducing technical debt. This will help in the long run, and I just sort of do it as I go. It's like tidying the house - it is more efficient to do little things regularly than to have a giant mess to deal with later. The little things make it easier to do something later, because you don't find yourself yak shaving as often to get to the main task.

2. In my personal and observed experience, doing the fixes at first seems like a heavy task ("it works, this is just grunt work"), but as you get practice at it, you'll find that it helps you find bugs, it becomes a habit, and you just do it as you go.

3. It helps me understand the code better. Even if I wrote it. Just because I got it to pass tests doesn't mean I know why at first - but cleaning up the code makes me realize edge cases and what's happening.

One thing that helps is to keep in mind the axiom "all code sucks. Some code is useful" (to paraphrase a famous saying). I know when I code stuff, I tend to do a throwaway implementation or two first, just to wrap my head around the problem, then keep one. After a few months, I'll revisit and understand even better what to do, and rework it again. Then maybe it's decent.

I'm sure there are people out there who can do it great the first try, but they aren't that common. The secret is to not wrap your ego/identity as a programmer/understanding of accomplishment up in the code you've written, but in your ability to solve or lessen the problems that come up.


The short answer is "yes, it's worth the time".

One big improvement I've made in my code over the years is that I take the time to improve my code. Continuously improving code quality does lead to high code quality, and high code quality reduces maintenance hours (both amount needed and time spent).


Well, it really depends on business objectives here.

If you're writing code which will live at least a month — definitely. But if you're writing a quick hack of a project or test that you know won't be around next week, it's often a real waste of time and effort.

And I'm not writing about that theoretically: for me personally, it's a real problem. Whenever I sit down to write a simple, dirty, hacky thing, I always find myself a few hours later googling for the best way to implement unit tests for this particular case or something like that. Which is sometimes educational, but is very distracting.


The problem is that one-time hacks have a nasty habit of living on and becoming long-lived systems. And it can be hard to explain to users / business owners that just because it appears to work doesn't mean it's "already done".

Writing quick hack code is a good thing if you are in an organization that is disciplined enough to throw it away after you've learned from it.


One practice I personally follow with my team is to always pay attention to when the code you're reviewing is not obvious. e.g. If I have to spend more than a few seconds trying to figure out what a function does, then you're pretty much going to have to either rename it, rewrite it, or (not ideally) do a better job at clarifying through comments.


I think part of the point of this article is to say that your transition into "seasoned developer" status should include developing your own opinions and solutions to the code smells discussed.


I disagree with his last point. There was once a time when I was obsessive about not copy-pasting code. Over time I eventually learned that sometimes it's better to just copy+paste a code snippet in a few places than to be dogmatic about making sure that nothing is ever duplicated. My rule of thumb is: if there is branching, don't copy; if there is no branching, then you can copy+paste it.

This snippet should only live in one place:

    if (something) {
        for (blah blah) {
            something_else();
        }
        do_something();
    }
on the other hand, this code can be copy+pasted as much as you like:

    do_something();
    do_another_thing();
    another_call();
Some dogmatic programmers will want to place those last three lines of code into a separate function and instead copy+paste that. But I've found that doing so sometimes makes the code harder to read.


What I understood as the author's issue with copy+paste was the knowledge that changes are expected.

if you copy:

    do_something();
    do_another_thing();
    another_call();
a few times in your codebase, but decide that you want to alter that a bit:

    do_something();
    perform_new_feature();
    do_another_thing();
    another_call();
you can hurt yourself by forgetting that that alteration needed to be made to each instance

It's just easier to maintain if you do this:

    function dodoan() {
      do_something();
      do_another_thing();
      another_call();
    }
then copy paste dodoan() as many times as you want

Though I do agree all rules have their exceptions and should be handled in a way that allows for future alterations, instead of some obsessive, mindless adherence.


Yes, in that case it's a win, but it becomes an anti-pattern when you sometimes want to perform_new_feature() and sometimes you don't. I've seen it happen where you end up with many functions like:

  def a_and_b_and_c() {...}
  def a_and_b_and_c_error_checked_internally() {...}
  def a_and_new_and_b_and_c {...}
  def a_and_new_and_b_and_c_error_checked_internally() {...}
  def a2_and_b_and_c() {...}
And so on. Basically you are just imposing a memorization task via the names rather than using the underlying bits well. Learning the balances around this is one of those "craft" bits of programming.


This is a great discussion! I've always thought that after the basic "how do I even do this at all?" concepts of programming, the next most important concept to learn to wield and perpetually sharpen is the "don't repeat yourself" principle. Huge swaths of the software engineering literature are dedicated (explicitly or implicitly) to when and when not to apply that principle.

Your parent's comment addresses by far the best argument for the principle, which is that any time something is likely to change in a snippet, copy-pasting should be undertaken rarely and thoughtfully. With the recognition that nearly any given snippet is likely to change in some way, we can conclude that nearly any copy-paste is "bad".

Your comment is then a great argument against the thoughtless application of the principle, recognizing that yes, granted, nearly any snippet will change, but the required changes may well not be uniform.

Personally, I think it still makes sense to pull out shared behavior for the period of time in which it is shared. During that period of time, all you have is a guess that the code might eventually diverge. If there comes a point in time when you do want the code to diverge, it is straightforward to copy-paste it back out, if branching or creating a similar method with the differences is a worse option. On the flip side, if you haven't pulled out the shared behavior, and you discover that you want to change it everywhere in the same way, it is far less straightforward to go find all those places, or even be aware that you need to do so.

Another point is naming by purpose rather than behavior. To continue your example, it makes sense to ask why one method checks internally and the other externally? What different purposes do the different checking styles have? If there are good answers to questions like that, the methods can be named better, and it becomes less about memorizing name than understanding when to use which.

Of course none of this is at all black and white, and I think you're spot on that this is one of those craft bits, and among the most important!


One thing to keep in mind here... some of those "you can't predict" things are utterly predictable in light of experience. That part can't be faked too well; there isn't a good set of rules about it. But experience is a good teacher, and you do eventually learn to be reasonably accurate about when to do C+P vs DRY, even if you can't explain the why of it.

I think it comes down to this - when I first learned to code, it was hard enough to keep track of a couple different functions. As experience came, what I could track and reason about and keep in my mental model grew, so now that I've got some experience I can see more of how the whole system will grow and interact. I'm not really smarter per se, but rather I just have more practice as putting it all together.


Couldn't agree more. It's definitely a function of the extent to which you're guessing, which is a function of your experience solving very similar problems.

I do think we (programmers? people?) have a tendency to avoid inspecting the experience we already have. We thought hard about some pattern a few times and came up with some instincts based on that experience, which is great, but we should make sure to double-check those instincts from time to time.


It's actually pretty well predictable if you think about it in terms of encapsulation and core functionality. If this functionality is an external requirement to the purpose of the components, then it absolutely should exist as its own component that coalesces that functionality into a single location. However, if that functionality is core functionality to the components, and they just happen to do the same thing in the same order... Then that's where you'll eventually see divergence if you try to link them to a third component. In those cases, you should either be trying to join the two components into one, or just happily leave them as separate components that just happen to perform some common operations by way of their primary function.


In at least two of your examples you could pass in an error handling function (or class) and it would still be a net win.
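A minimal sketch of that idea in Python (the step names `do_something`, `do_another_thing`, etc. are hypothetical, echoing the examples upthread): one parameterized function replaces the whole family of `a_and_b_and_c` variants.

```python
# Hypothetical step functions standing in for a, b, c in the thread's example.
def do_something():        return "a"
def do_another_thing():    return "b"
def another_call():        return "c"
def perform_new_feature(): return "new"

def run_steps(new_feature=False, on_error=None):
    """One parameterized function instead of a_and_b_and_c,
    a_and_b_and_c_error_checked_internally, a_and_new_and_b_and_c, ..."""
    steps = [do_something]
    if new_feature:
        steps.append(perform_new_feature)
    steps += [do_another_thing, another_call]

    results = []
    for step in steps:
        try:
            results.append(step())
        except Exception as exc:
            if on_error is None:
                raise          # external error handling: let the caller decide
            results.append(on_error(exc))  # internal: delegate to the hook
    return results

assert run_steps() == ["a", "b", "c"]
assert run_steps(new_feature=True) == ["a", "new", "b", "c"]
```

The trade-off is real, though: each flag or hook you add moves complexity from the function names into the parameter list, so this only wins while the variations stay few.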

Analyzing this further would require having a more fleshed out example.


Inexperienced programmers sometimes can have a dysfunctional, almost worshipful, relationship with complexity. They're entering an industry riddled with complexity, most of it way over their heads, and they're in awe. So when they write complicated code themselves, they almost take it as a point of pride. I wrote this massive, tangled 300-line function that actually works, by god.


Right, the best code strives to be dead simple, almost boring. It's not like poetry; it's not more fun when it has hidden obfuscated meaning behind it.


From someone who started as a developer only 2 years ago: I found the stuff about how to properly design classes to be the hardest part. There aren't enough good articles about it on the internet.

For example, how would a developer know to break [insert functionality] into a separate method? It is not always obvious to junior developers to break a method into smaller chunks, especially when they understand every line of code written. They usually recognize the problem once it is pointed out, but it doesn't often register beforehand.


And knowing when OO is not the right solution to all problems.


In this case, I'm using classes very loosely - the same applies to methods, and when to break them up into smaller methods.


I believe spacecowboy_lon is referring to functions as opposed to methods. For those that don't know the difference: a method is basically a function that has access to the state of an object (correct me if my definition is wrong or sucks).


A common approach is to describe the functionality aloud. If your description includes the word 'and', then there may be a problem ("this method increments the foo counter, and makes the tea").
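Sketched in Python (all names here are illustrative, echoing the comment's example): the "and" in the description flags two responsibilities, and splitting them keeps each one independently testable.

```python
# "This method increments the foo counter AND makes the tea" -- two jobs.
counter = {"foo": 0}

def increment_foo():
    """First responsibility: bump the counter."""
    counter["foo"] += 1
    return counter["foo"]

def make_tea():
    """Second responsibility: entirely unrelated to the counter."""
    return "tea"

# Callers that genuinely need both simply compose the two pieces:
def afternoon_routine():
    return increment_foo(), make_tea()

assert afternoon_routine() == (1, "tea")
```

The composed version costs one extra line at the call site, but each half can now change, be tested, or be reused without dragging the other along.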


I don't think there's anything wrong with any method that also makes me tea :P


It is the minute details that are sometimes the problem. For example, if a developer is iterating over an array to do something to each element, there is a good chance that the per-element operation should be separated out in case it is reusable (e.g. with something like map).
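In Python that refactor looks like this (the `normalize` operation and the sample data are hypothetical):

```python
def normalize(name):
    """Hypothetical per-element operation, pulled out of the loop body."""
    return name.strip().lower()

names = ["  Alice ", "BOB", "Carol  "]

# Once extracted, the same function drops straight into map() (or a
# comprehension) instead of being rewritten inside every loop that needs it.
cleaned = list(map(normalize, names))
assert cleaned == ["alice", "bob", "carol"]
```

The junior-developer blind spot the comment describes is exactly this: the loop works and every line is understood, so the reusable piece hiding in its body never registers until someone points at it.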


I'm just not sure you can teach these things to a junior programmer and actually expect to see results. A baby takes 9 months to develop; you can't expect to see fingernails until they've developed fingers.

As a junior, you're probably spending most of your time (and a lot of it at that) just trying to make the dang code work. Making sure it's written in a way that other people can maintain it etc is secondary, because who cares if you can maintain it if it never even worked in the first place!

When working with a new junior, It's important to expose them to existing (good!) as well as bad code bases. At the same time, let them write new code on things that aren't super business critical. You have to accept that it's going to take them some time to learn, and you should just work with them until they don't have to think about breathing any more.


Yes, new programmers spend a lot of time just getting things to work.

But, from years of managing summer interns, the biggest surprise to new programmers is the amount of non-code work that needs to be done alongside actual coding: documentation, comments, peer reviews (and the associated rework). The sooner they learn this is part of the job, the sooner they'll fit into the team's workflow and contribute at a high level.


Fair enough, that's very true.


>>> I'm just not sure you can teach these things to a junior programmer, and actually expect to see results

You can't expect to teach them and see results immediately. But teaching them (and supervising / reviewing code) would certainly help to make the learning period shorter.


The biggest challenges for me as an intern were reading and understanding someone else's code (Python) and understanding the problem domain quickly and sufficiently. It's hard to judge how deeply you need to understand something before you program around its concepts. Of course ideally you're an expert, but when you're potentially only using something for the summer it makes sense to somehow seek an understanding of the salient concepts without getting overwhelmed by details even though it's sometimes the little caveats that can make a tremendous positive or negative difference in your design depending on your knowledge of them.

And I'll admit that self-discipline was sometimes an issue. When you take responsibility for a complex project when young and inexperienced, you feel a lot of pressure to perform and get it working so you can tackle the next task. Maybe us noobs aren't used to this sort of pressure?

I had the chance to do a code review with a few of the top engineers in the company, and it was tremendously helpful. We focused on a small part of the code that was responsible for a performance bottleneck, and I didn't realize how sloppy some of my code was until we walked through the function. There were really silly redundant things that I thought I would never have written in my right mind. I was trying to optimize a function where some of the simplest code was redundant and actually contributing a bit to the performance problem. They were nice about it and said it was not unusual and that code reviews are great for spotting such things.


Well-organised code with sensible, correct naming of things will save you years of development time during your career.


True story. I could not tolerate most programming without consistent conventions to guide me. Pretty much every programming language has had a meta-language of convention that went with it, to support learning and readability.


One thing that should be mentioned here is that the amount of time one has to learn to write good code really depends on the company one works at. If you work at smaller firms, it's much harder to justify putting in extra time beyond just making it work (although this bites you in the long term by making the codebase uglier). In larger firms, you have much more time for code reviews and refactoring.


I've stopped worrying about duplicating code that is less than one line long, and I'm a happier person as a result. It's a slippery slope that eventually you stop writing code that actually does anything and are just writing layers of indirection. Similarly, not all string values need to be named constants, because not all string values are intended to be changed.

(edited)


The insight that the problem is not "knowledge" but "self-discipline" is an interesting one.

I made similar observations and came up with a list of "coding commandments": https://larsxschneider.github.io/2013/08/25/ten-commandments...


I have committed all of these mistakes at one time or another, and I am quite sure that I still slip from time to time, especially with regard to #1. But one thing that I frequently do is read my own code after writing a certain amount. It may be after writing a module, or after refactoring it, or when I don't feel like writing any more. And reading my own code helps me uncover a lot of inconsistencies in it.

I think I started reading after @edw519 said something like he reads his code every day before going to sleep. While writing, my focus is on making the thing work; when I am reading, I am more critical of my code. Not as good as having a review, but still helpful.

Also, I think reading other programmers' code is a good exercise. Reading different styles of books is already recommended to writers, and I think programmers can extract the same benefit by reading code by different programmers. If nothing else, it will at least develop your debugging abilities.


Well said! A similar thing I do is to read the diff of what I've written so far before making a pull request, as if I were a reviewer. I've caught many mistakes and "TODO" comments that way.


Young programmers should first learn to _READ_ and understand code.


You mean getting used to other programmers' idiosyncrasies and deciphering them?

You will dissuade them from ever taking up this profession with that attitude.

I was attracted to this because of the opportunity to create things, not because other people wrote stuff and now I have to read it.


I disagree that's what the GP meant. Too often people just hack away without truly understanding what they are doing. Then you end up trying a million things until finding something that works, and not knowing why it works. If you take a few minutes to read the documentation and understand why you are doing something, rather than "I tried it and it worked," you will become a better programmer.


The Perl Cookbook was one of the best technical books I read in my early career, specifically because it said, "No, look, there are about four ways to do this right, and here are their tradeoffs" for multiple problems. Articulating that concept, and the ability to look at alternative implementations that each might be good, helped me a lot.


The following are a few of the most common issues that I see in code written by young programmers. These are also issues that experienced developers can help new developers fix:

1. Lack of knowledge of the business domain leading to an inability to understand the high-level, conceptual view of a system

2. Poorly named variables

3. Poor code organization

4. Low cohesion in objects and functions, which in part flows from bad naming

IME, these issues can be fixed on an accelerated timetable when experienced developers help mentor younger ones.


Poor database schema design. If you get this wrong it makes your life a lot more difficult, and it is harder to fix. These bad models often outlive the applications themselves; you see them around years after they were made.


I deal with this every single day. The errors made in poor schema propagate through the rest of the application.

To use a metaphor: It doesn't matter how delicious of an apple you have (the data), what truck you use to transport it (the back-end code), or how nice of a store display you put up (the front-end code), if you don't store them properly along the way (the schema).
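To make the metaphor concrete, here is a tiny made-up sketch (Python's sqlite3; the table and column names are invented) of a schema decision that haunts every layer above it:

```python
# A contrived illustration of a schema mistake that propagates upward:
# cramming a list into one column versus normalizing it into its own table.
import sqlite3

db = sqlite3.connect(":memory:")

# Hard to fix later: tags stored as a comma-separated string.
db.execute("CREATE TABLE article_bad (id INTEGER PRIMARY KEY, tags TEXT)")
db.execute("INSERT INTO article_bad VALUES (1, 'sql,schema,design')")
# Finding articles tagged 'sql' now requires fragile string matching
# in every piece of back-end code that touches this table.

# Normalized: one row per tag, queried with a plain filter or join.
db.execute("CREATE TABLE article (id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE article_tag (article_id INTEGER, tag TEXT)")
db.execute("INSERT INTO article VALUES (1)")
db.executemany("INSERT INTO article_tag VALUES (1, ?)",
               [("sql",), ("schema",), ("design",)])

rows = db.execute(
    "SELECT article_id FROM article_tag WHERE tag = 'sql'").fetchall()
print(rows)  # [(1,)]
```

The bad version "works" on day one; the cost only shows up when the application on top needs to search, count, or rename tags.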


Scheme.

As still the best "small" language to teach fundamental principles (everything is a first-class value; symbols are references to values, i.e. naming; procedure composition and nesting as the basic building block; ADTs; immutability of data; evaluation strategies, eager and lazy, and what is meant by a "mostly functional language"; etc.) and shapes of data structures (list, tree, table).

It will pay off with any "stack" or "framework".

Haskell.

To learn that static typing done right (type inference) is a very clever feature, but that it catches only simple errors (it cannot catch flawed logic or wrong abstractions), so it is not a silver bullet; and, perhaps, to realize why "extremes" like "pure functionality" or a "lazy language" are unnecessary complications rather than big gains. And that monads are a mere accidental, awkward ADT to ensure an order of evaluation in a "lazy language", where it is undefined by definition.

After that one would find everything in industry rather easy and boring, and develop a healthy aversion to Java and other "packers" stuff.


Disagree about monads. What you mean is that IO actions are an awkward way to ensure evaluation order in a lazy language. In Haskell, IO actions form a monad, but this is only reasonable because so many other things form a monad that the language has special support for them.

Monads are great in that disparate things like lists, sets, IO, control flow, state, optional values, and even functions are all instances of a single ADT which is useful enough that one can write reasonable code abstracted over it, and Haskell provides the mechanisms to do so.
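Since the thread keeps circling this point, here is a rough sketch in Python (not Haskell, and with invented helper names) of what "code abstracted over bind" means: one generic function that works on lists and on optional values alike.

```python
# A rough illustration: very different "containers" can share one
# interface, a bind that feeds each inner value into a function
# returning the same kind of container.

def bind_list(xs, f):
    # list monad: f returns a list; concatenate all the results
    return [y for x in xs for y in f(x)]

def bind_maybe(x, f):
    # "Maybe" monad: None short-circuits, otherwise apply f
    return None if x is None else f(x)

# One generic computation, abstracted over bind and unit: pair each
# value with its double, whatever "container" we happen to be in.
def pair_with_double(m, bind, unit):
    return bind(m, lambda x: unit((x, 2 * x)))

print(pair_with_double([1, 2], bind_list, lambda v: [v]))  # [(1, 2), (2, 4)]
print(pair_with_double(3, bind_maybe, lambda v: v))        # (3, 6)
print(pair_with_double(None, bind_maybe, lambda v: v))     # None
```

In Haskell the type class mechanism passes the right bind/unit pair implicitly; the point here is only that the same abstract code runs against both instances.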


> one can write reasonable code abstracted over it

Don't you think that at least in a strict language, this would be rather over-abstraction or abstracting for the sake of abstraction?


Not really. I think you're confusing Haskell's purity, which makes it necessary to write monadic code, with its monad syntax, which makes it easy.


I think I am not. Monads have nothing to do with "purity": Erlang is a pure-functional language, but it has no monads.

Monads make sense only within a language with normal-order (instead of applicative-order) evaluation, to ensure that one computation (or action) "finishes" (is reduced to a value) before another (>>= and >>). return is for the type-checker.


That's a very specific way of thinking about monads. Sure, they enforce ordering in a language where you may want to do that. But they're also any ADT equipped with a generalized map (that we happen to call >>= or bind) that commutes nicely. I don't think about my list operations using bind as being about ordering computation; I think about them as transforming data in a way that is a little harder to do with more traditional list operations. Same with Set, Maybe, etc. The ordering feature of monads is more about IO actions than about monads in themselves.

All I'm saying is that monads are a useful abstraction regardless of whether or not they are used to encapsulate effects. I use Traversables, which have a bind operation, all the time in Scala, and it is an effectful language.


I agree with everything you said here; I particularly agree with the part where you say that after knowing Scheme and Haskell everything else becomes easy and boring. Does it ever. Makes me question why I got into programming now that I see the bigger picture of what I have been diligently trying to master for over 15 years.

To the beginner I'd say: study Scheme and Haskell. Understand them. Once you do, look at other languages. Everything should be familiar. Now you have to decide if you want to spend your professional life always learning different ways to do the same few things there are to do that you learned from Scheme. YMMV but yes it gets boring and yes it will burn you out.


Maybe it's like cheating in video games.

Maybe we should all play the normal game instead of turning on god-mode Scheme?

I'm not going to lie: I don't want to be bored or burnt out.


I like the metaphor and think it's on point.

It's as if you found a way to make cash appear from nowhere. It would be hard to find a reason to work for money.

Same deal with Scheme. It lets you do anything, easily, but it makes it hard to find a reason to want to do anything.

To be fair I wouldn't blame it on Scheme, more on my own brain, but still, beginners should look at Scheme right away so they can get to the "I can do anything, now what do I want to do, if anything, with this tool" part faster.


Something I've learned over the years:

"A little discipline now will save a lot of discipline later."

This goes for anyone in any walk of life and in any profession.


It's hard to emphasize enough how important naming is. The author accurately describes the connection between class bloat and imprecise naming.

One tip I'd offer is: You are not inventing something by naming it, you are describing how it's used (based on its behavior).

If it's hard to name, one possibility is that it is not designed properly or that it's doing too much.
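A contrived illustration (all names here are made up): when a function resists naming, it is often doing two jobs, and naming each job after its observable behavior makes the split obvious.

```python
# Hard to name: this computes AND formats, so no single name fits well.
def process_orders(amounts):
    total = sum(amounts)
    return f"Total revenue: ${total:.2f}"

# Named after behavior, each piece is easy to name and to reuse.
def total_revenue(amounts):
    return sum(amounts)

def format_revenue_summary(total):
    return f"Total revenue: ${total:.2f}"

print(format_revenue_summary(total_revenue([10, 20.5])))  # Total revenue: $30.50
```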


Expected to see a long list of algorithm design books, but was pleasantly surprised.

"In this case however it all still makes sense to be in one class, but the class simply grows too big." - would be nice to see some examples, I bet anything can be split in a nice way.


In my case I see this when I treat a class as a namespace and fail to follow the single responsibility principle.
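For example, a hypothetical sketch of the "class as namespace" smell (helpers invented for illustration):

```python
# Smell: unrelated helpers grouped under one class simply because it
# was a convenient place to put them; the class has no single job.
class Utils:
    @staticmethod
    def parse_price(s):
        return float(s.strip().lstrip("$"))

    @staticmethod
    def slugify(title):
        return title.lower().replace(" ", "-")

# Better: split by responsibility. In Python these would just be plain
# functions in separate modules (say, pricing.py and text.py).
def parse_price(s):
    return float(s.strip().lstrip("$"))

def slugify(title):
    return title.lower().replace(" ", "-")
```

The behavior is identical; the difference is that each unit now has one reason to change.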


I'll add to the list:

- The inability of younger peers to ask for help or to otherwise seek design reviews before they write the code.

- Super-hero programming, which most of the time boils down to the above, plus various forms of pseudo-optimisation that are difficult to read. E.g. messing with inheritance and directions: UpwardEngine --- inherits ---> DownwardEngine, instead of using a BaseEngine, because it saves one class definition (and improves dispatch performance...)

Another piece of advice I would give to young developers is that writing as many bits of code as possible helps you get better. That said, reading code in order to refactor and debug it, given the chance to do it right, is more difficult and more rewarding in terms of skills.
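The engine example can be sketched like this (Python, with made-up behavior, keeping the class names from above):

```python
# Smell: UpwardEngine is not a DownwardEngine; it inherits only to
# reuse code and flip a sign, "saving" one class definition.
class DownwardEngine:
    direction = -1

    def step(self, position):
        return position + self.direction

class UpwardEngine(DownwardEngine):  # inheritance just to flip the sign
    direction = +1

# Clearer: a shared base expressing what both engines really are.
class BaseEngine:
    direction = 0

    def step(self, position):
        return position + self.direction

class Down(BaseEngine):
    direction = -1

class Up(BaseEngine):
    direction = +1
```

Both versions compute the same thing; the second one does not lie about the relationship between the two engines.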


My opinion is that most of the bad habits of inexperienced programmers are just manifestations of more general youthful hubris and inexperience in the nuances and complexities of life.

I'll add to the list the most extreme form of "not invented here" syndrome: "I didn't write it" syndrome. The programmer sees themselves as missing out on the fun of solving a task by using a library (or reading the relevant framework docs), when experience teaches you that the real misery comes down the line, once you need to support your hand-rolled physics engine lacking in tests...


How does one keep these things in practice in the reality of an over-committed software team with shifting priorities, difficulties interfacing with product management, and bad specifications/requirements that eventually result in significant scope creep? I often feel like I want to refactor code, but the cost of doing so is so high that it would slow me down to the point of missing a deadline. "It works" is usually as much as I have time for.

Maybe I'm just not a very good programmer...


Instead of trying to dedicate time specifically to refactoring, try to clean up classes as you come across them when fixing bugs or adding features.

Whenever I open a file, I try and take a quick glance to see if there's anything that could use refactoring before working on the actual issue (if it's either a particularly large file or one that hasn't been touched in years I'll check Sonar[1]).

[1] http://www.sonarsource.com/


One I see regularly is not testing code or writing tests that are useless. To some extent, it's related to the mistakes mentioned in the post: messy code is hard to test.
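For instance, a made-up contrast between a test that proves nothing and one that pins down behavior:

```python
# Hypothetical function under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Useless: only asserts the function runs and returns something.
def test_discount_runs():
    assert apply_discount(100, 10) is not None

# Useful: asserts concrete values, including the no-discount edge case.
def test_discount_values():
    assert apply_discount(100, 10) == 90.0
    assert apply_discount(100, 0) == 100.0

test_discount_runs()
test_discount_values()
```

The first test would still pass if the function returned the wrong number; the second would catch that.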


I agree that these are things programmers need to learn. I think analyzing code (as in debugging it, learning to read what someone else wrote and understanding it) is equally important. I learned no debugging while getting my degree. (Or rather, I learned no debugging during school hours. I learned plenty working on my own projects during that time, though.) I see this with junior programmers we hire now, too. They sometimes don't even know what a debugger is!


The Clean Coder is a good read for starting programmers. It goes beyond code to what it takes to be a software professional.

http://www.amazon.com/The-Clean-Coder-Professional-Programme...


As well as the extensively reviewed, recommended, and praised "Code Complete" [0] by Steve McConnell.

I'm working my way through it now (1+ year of professional experience) and it is a magnificent way to improve the quality of your code. I read it off and on; my goal is only 40 pages a week, so that I'll make sure to find the time to do it (I'm doing a master's program and enjoy living in NYC too, so setting huge goals doesn't work well for me).

Every time I crack it open, I find myself inspired to write better, clearer, and more concise code. Sometimes you just need a nudge to get back into doing things you already know you should be doing.

Finally, I think constantly learning is the best way to become a proficient, and then skillful, professional software engineer. Many programmers become proficient and then level off. And that's good enough. But if you truly want to become one of the top 5% in your field, you need to do something called deliberate practice. Reading 'Talent is Overrated' [1] really exposed me to the theory of constantly challenging yourself in order to grow. I really recommend it; I find myself trying to apply its ideas to all areas of my life.

[0] - http://www.amazon.com/Code-Complete-Practical-Handbook-Const...

[1] - http://www.amazon.com/Talent-Overrated-Separates-World-Class...


Excellent. I'd also recommend "Code Simplicity: The Fundamentals of Software"

http://www.amazon.com/gp/aw/d/B007NZU848?ie=UTF8&redirectFro...


This list of bad behaviors isn't limited to 'young' programmers either. Which is really sad.


This. I see devs that have been programmers for 20 years and still can't name their methods properly. I see most of the things the article talks about daily.


So true. When I was a junior, code quality was the first thing I was taught, in the first week.

While the code worked reasonably well, there was no indentation, no spacing, no comments, and no variable names longer than one letter (perhaps because my first language was GW-BASIC).


That was often done with GW-BASIC to fit the code into the small amount of available memory.

Another trick was to manually unwind any loop counters if you had to break out of a loop, as GW-BASIC had a memory leak.


I keep hunting for such a post written specifically for an iOS developer... to get beyond the initial frameworks and learn processes and flows.


Everyone hates to do it, but a lot of this comes from reading code. The more code you read, the more you'll learn to understand idioms, styles, what makes things good and bad for understandability. Then you'll be able to do it yourself.

Another point to make: while every language has its own idioms and style, there are also lots of "clean code" lessons that apply across languages, so guides about, e.g., Python translate somewhat to Objective-C.


Reading code is a good start, but I've found that it's not enough. When faced with an unfamiliar concept, I'm not sure whether it's an unfamiliar design pattern, a hacky kludge, or a special case to get around some quirk for a particular use-case.

I've found that reading books—such as the Pragmatic Programmer series, Effective Java, and others—helps me to understand the rationale behind what otherwise seem like strange (sometimes even boneheaded) design decisions. Even oft-derided patterns (such as the "FooBuilderFactoryBuilder" so often seen in the Java world) make sense when understood in the context of the problems they're trying to address.


It depends on what code you read, though. Think of PHP devs that learn from WordPress, for instance. It's not exactly setting a great example... (Though to their credit, WordPress improved somewhat in recent years.)


On the one hand, sure, this makes some sense; on the other, it maybe means: read more code... (I know this sounds sort of snarky). For example, if I wanted to write a novel and had only read, say, _A Tale of Two Cities_, I might think I needed to include a bunch of broad social statements and make my point through strongly contrasting scenes and juxtaposition.

If I had only read Dickens, I might think writing a novel entailed finding only strange characters and pointing out social flaws, keeping a thematic style that plays light and dark against their usual associations. And so on.

If I had only read Twilight, I wouldn't care much about dialog or character development. I wouldn't understand that there are things you can do thematically without exposition.

If, however, I read Dickens, and Hemingway, and Tolkien, and Dan Brown, and Stephenie Meyer, and ..., I would have a different understanding of what could go into a novel. I would be able to see where story arc intersects with bigger themes and character development. I would see different ways of structuring sentences, paragraphs, chapters, and even whole books.

None of this reading of course will make me a great author, writing does that, but a wide exposure will certainly help me understand where my writing is working vs where it isn't, it will help me understand how to structure things, help me shape my own work.

I consider the same to be true of code. The folks that only read WordPress code have a very limited understanding of possibilities. The folks who have read that, and rails and django, and jekyll and flask and ... will see a wide range of styles, ideas about structure, and so on.

An aside: Wordpress has some pretty ugly parts, but there are some ideas in there about structure that I have always liked. Particularly considering it was designed and written during the "explore and figure out what works" phase of web apps, when the industry didn't really have a "best practices for the web" that included lots of experience with what does and doesn't work.


Check http://www.objc.io/ for good pointers for iOS and Objective-C related things. I haven't done iOS for a year/year and a half, and I still read their posts.


To code in mIRC. At least that's what I did.



