Signs that you're a bad programmer (2012) (yacoset.com)
289 points by theunamedguy on March 8, 2015 | 170 comments



One of the unstated, or subliminal, purposes of these kinds of ideas is to act as an emotional "vent." Their social purpose in the programmer community is to present a target for the release of "justified" emotional aggression.

Other fields and other kinds of social groups have them in their own contexts.

One of the things that such sets of ideas produce, in sufficiently large populations and sufficiently complicated contexts, is false positives. That is to say, you will tend to hastily pattern-match your object of justified scorn. You will tend to detect more of them than actually exist. (This is how human beings are wired up.) What's more, you will conflate highly salient attributes with actual symptoms. Being female, or a member of a certain ethnic group, or being "too old" then becomes a sign of the insidious danger.

The thing to remember, is that extraordinary situations require extraordinary causation. Such individuals do exist, but there must be something or someone very powerful propping them up. They could be very good at manipulating people. They could have been granted a sinecure by someone very powerful. Most often, they are the unwitting beneficiaries of pathological organizational policies.

So how is a diligent rationalist supposed to approach this? Remember that you are wired up to produce false positives. Be aware of your social-animal instincts, which will tend to have you use the subject of your suspicions as an object of scorn. Look diligently for "extraordinary" causation.

Most of all, don't judge prematurely.


Yep, agreed. There's an abundance of this kind of scornful thinking in the tech world. I figure it's mostly a subliminal attempt by programmers to convince themselves that they're smarter than other people.

That being said, the article seems to be pretty well written and does have some good info about anti-patterns once you get past its title. It also proposes remedies to these anti-patterns and bad habits, which are useful.


> I figure it's mostly a subliminal attempt by programmers to convince themselves that they're smarter than other people.

That's when you've let your social animal instincts get the better of you as a sentient lover of knowledge. What will truly distinguish one as an exceptional programmer is an ability to shed prejudices, analyze, and listen well, not being on top in the geeky version of the primate dominance game.


I upvoted you because I suspect that your first two sentences are true.

You completely lost me with the extraordinary causation/situation tangent, though I'm not sure if it is because I disagree with you or because I am misunderstanding you.


There's definitely a profit motive for a company not to pay $80k a year while getting nothing for it. Something must be going on.


One time, I was helping a friend of mine work on his JavaScript (js) skills. We cycled through all the basic sorting algorithms to practice js best practices and to help with understanding computational complexity issues.

One sorting algorithm, I can't remember which one at the moment, involved recursion. It was pretty simple, but my friend couldn't understand the nature of the problem because he didn't have an instinctive comprehension of recursion. I threw together a Visio diagram of the process and walked him through the blocks one by one. He still couldn't wrap his mind around it. He started to berate himself, and I could tell he was giving up on his dream. I told him to keep trying, to keep spending time on places like HackerRank. He did.

8 months later, he landed a job as a js full stack programmer. Not a senior level position, but a damn good one. I'd seen his work before his interview and it was good. He had a command of all the major frameworks and he was observing all major js design patterns while avoiding idiotic anti-patterns. His code was clean and easy to read.

IMO, pointing out that someone APPEARS to code like a loser is a petty playground tactic of someone who has a serious ego issue. Taking someone by the hand and helping them through the hurdles so they can become a better programmer is the solution. I can't imagine how my friend would have reacted if he had read this petty article while he was trying so hard to become a decent programmer. He may very well have given up. That's not acceptable.

There are only people who give up and people who refuse to concede defeat. Ranting on a website about bad programmers is giving up.


I find the article really helpful, to be honest. I am not a good programmer (it's not my job; I am a physicist); my code runs and does all I need, such as automatically controlling my instruments so I can do something else, evaluating my data and making plots. I can also use other people's code, though I sometimes code by googling.

Since code quality is by no means a qualifier for our work and most people don't really care, it's hard to become better beyond just knowing the languages better. Programming patterns? Ha, I just use all languages the same way. This article tells me where my weaknesses lie, which helps a lot, because I can only google things I have some notion of.

It's not just ranting, there are detailed lists of what could be not-good and how to fix it. The only thing potentially offputting is the title, really. (Might help that it's not my job to be good at coding, just a bonus.)


I might be punished for this, but it has to be said. First, I agree that learning can be a challenging process, and helping someone may make the difference between a dropout and an accomplished person. But then, I wonder whether your friend had finally comprehended recursion when he landed that "damn good" job as a "js full stack programmer". If he still hasn't, chances are other important concepts elude him too, and the pain of his limited understanding is felt only by his fellow colleagues, not by you. His code may be clean (which I personally appreciate), but that is rarely enough by itself. What is or isn't acceptable needs to be judged by the task at hand, and as sympathetic as one may be, red lines must be drawn all the time (and that is what the article actually does). For average JS implementation tasks your friend may be OK, but you really wouldn't want to encourage him (just out of a moral sense of fairness) to get into, say, life-critical systems development without the necessary competence.


I agree. There are plenty of very good programmers who never went through formal training. They had to learn somewhere... first copying code, then modifying it to their needs, then finally understanding the concepts enough to write it from scratch.

I personally remember the first time I used a recursive function for a fairly complicated use. It broke my brain for a while, but I thought about it for several days, stepping through all of the iterations until I had something that worked very well.

If I had been told I was a bad programmer because I was having trouble at the time, I probably would have given up.


I'm not a big fan of the language used, w.r.t. the dichotomies between a "good" programmer and a "bad" programmer. There are certainly challenges for junior developers that are mentioned in this article (and I agree with most of it), but I wouldn't consider those people who do it "bad", rather "inexperienced". This may be a mere nuance, but it's not something that should be perpetuated. We all have our challenges and we are all learning to do better in our work.

In reading this article, I don't think any developer purposely does any of these things and if they do, they aren't "bad programmers", but just developers who are still learning.

It seems that the intent of the article is good-natured in that it's pointing out certain issues with code and how to remedy them, but calling people bad at their craft so that they read the article is a bit strong.


I would prefer "inexperienced" as well.

Some junior engineers are clearly talented. They will still make some of these mistakes. Even if people point out a mistake to them, they might not fully get the reasoning behind the better approaches. It is intrinsically part of growth: to experience, to learn from mistakes, to understand more deeply.

Some junior engineers lack potential. That is a fact in every profession. They are not good at programming, but they might be good at other things. If one can afford it, one can help them gradually transition into other positions, like product manager or data analyst. It might sound surprising, but some of them are actually good material for management.

One of the best product managers I worked with was an engineer before. He said he was simply not good at it, and not interested, either. His contribution as a product manager was much much higher than as an engineer.


My team leader doesn't think in sets, and it gets a bit frustrating when he describes queries like an imperative program, with lots of ifs and no order to them.
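For what it's worth, a toy Python sketch of the difference (the order data is made up for illustration):

    # Imperative, row-at-a-time, lots of ifs:
    orders = [{"qty": 2, "price": 5.0}, {"qty": 0, "price": 3.0}, {"qty": 1, "price": 9.5}]
    total = 0.0
    for o in orders:
        if o["qty"] > 0:
            total += o["qty"] * o["price"]

    # Thinking in sets: filter + map + reduce in one declarative expression.
    total2 = sum(o["qty"] * o["price"] for o in orders if o["qty"] > 0)
    assert total == total2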


There are literally thousands of programmers out there slaving away in big IT departments who literally can't manage anything more than conditionals, for loops, assignments, etc. Given sufficient time, they can get a pretty complex system to be minimally functional or maintain software written by smarter people. It's overly charitable to think everyone is smart enough to be a good programmer.


As a non-programmer who likes to work on one-off projects or personal projects, I completely agree with you. If someone is still doing some of these things after 10 years of experience in the industry, then yes, they probably are a bad programmer. But if they're new, this can just be chalked up to a lack of experience. I think this relates to the industry's general tendency to shut out anyone who "doesn't know what they're doing". The language used to answer some of the more basic questions on Stack Overflow is a perfect illustration.


Very true, we're all "bad programmers" in comparison to the programmer we'll be in three years :) (well if we're passionate about programming at least)


Congratulations on fomenting that impostor syndrome.

While the text contains valid anti-patterns, its style is unhelpful. People should not be put down but encouraged. People will notice sooner or later by themselves if they are not fit for a field. The only benefit of berating someone for lack of skill is a temporary catharsis for the one doing the berating, while the person at the other end pays a psychological cost of one kind or another unless he's really mindful.

A berating command-voice is excusable in a heated argument, but maintaining this voice throughout a long piece of writing is stylistically a very poor choice, IMO. It just discredits the author's voice as that of an infantile jerk.


The problem is the article contains genuinely useful advice for newbie/intermediate programmers, but it is framed in such a way that it mostly appeals to experienced programmers with a superiority complex.


If you're experienced but not superior, you haven't been learning anything from your mistakes along the way. I thought the article was extremely nostalgic. An experienced programmer who can't spin up a funny story about the time he learned his lesson about bounds checking or implementing regexes the hard way is either not really an experienced programmer, or what is known as a "liar", or most politely has an extremely bad memory.

As a concrete example of #2, poor understanding of the programming model: in the old days we had a saying that you can write Fortran in any language, and it was not exactly a compliment. I guess a modern politically correct analogy would be the ability to write Perl in any language. Any programmer who hasn't made the mistake of carrying the code style of their language X into their new language X+1 is either inexperienced, or outright lying, or at best merely forgetful of having done it.

The article is poorly organized in that the phases explanation from the second "programming model" discussion actually applies to the entire article, with most of the article being examples of "phase 1". If you're at "phase 0" it could be helpful to see a roadmap, and if you're at a higher phase it's going to be somewhat nostalgic.


Thanks, you expressed the point I wanted to make much better than I did :)


From the follow-up[0]:

""Bad programmer" is also considered inflammatory by some who think I'm speaking down to them. Not so; it was personal catharsis from an author who exhibited many of those problems himself. And what I think made the article popular was the "remedies"--I didn't want someone to get depressed when they recognized themselves, I wanted to be constructive."

Which I think is fair enough.

Besides which, there's plenty of feel-good 'you can do it!' material out there. Some of us prefer a more vigorous pep-talk.

Frankly, my impostor reflexes were far more triggered by the 'interviewing is broken' posts on here in the last couple of days; I've got two literary degrees and a big vocabulary. I interview well, despite the fact that I know (counts) two algorithms. I am haunted by the idea that it was my vocabulary that got me hired, and not my engineering potential.

YMMV, etc.

[0] http://www.yacoset.com/Home/signs-that-you-re-a-good-program...


In short this article looks at aspects of programming and says if you're bad at each aspect you're probably bad at the sum of them. I'm not sure what the purpose is... you can figure out more directly whether you're a bad programmer by comparing the code you produce to that of your peers.

Aside from that the made up terms are annoying. It would be more readable if he used

  voodoo code => dead code
  bulldozer code => high cohesion subroutines  
  pinball programming => unreliable code
I can't think of anything good to say about this article.


> I can't think of anything good to say about this article.

Yeah, it certainly reads as very smug (at least to me). I think it's an attempt at humour that just comes off as condescending; after all, we've all been beginners at some point. I don't think it's very useful for 'bad' programmers either, because it doesn't go into much depth or explanation, so really what's the point? To make us all feel good that we don't do these things? (some of which are debatable anyway)


I think 'voodoo code' in this case means something a little bit more than 'dead code'. It means dead code that is still maintained and updated because whoever's working on it is unable to recognize that it's dead.

I'm also not sure that it's sufficient for most people to just compare the code they produce to their peers. Part of being a skilled programmer is the experience and knowledge necessary to properly evaluate the quality of code. Inexperienced developers may find the work of more skilled developers to be confusing, in which case they might not be able to glean any useful insights from it, or they might consider it excessively fancy and obscure. On the other hand, the good code might be easy for even inexperienced developers to understand (I'd say that in many cases that's part of the definition of good code), in which case the subtleties that make it good might go unnoticed.


Pinball programming is not sufficiently described by "unreliable code". Windows is the ultimate pinball program. If it's not working what do you do first? Reboot.


> 5. Difficulty seeing through recursion

It may be a function of the code I work on but I generally avoid recursion unless I'm really sure about the bounds of my input, which is rare. Few other things can take a smoothly operating service and hard crash it because the input got just a touch too big. Worse yet, there is usually no warning as the problem approaches.

Yeah, some languages have tail call optimization, but I've found it to be picky even in languages that have it: an otherwise innocuous refactoring of a method to make it look a bit cleaner, while logically identical, can all of a sudden crash the process because TCO no longer recognizes it. This is also the type of thing that doesn't show up in testing, but shows up immediately under production load. Recursion can make some algorithms easier to follow, but I'll take a slightly more complicated algorithm over being woken in the middle of the night by a 'stack level too deep' exception.
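(To make the failure mode concrete, a small Python sketch; the linked-list shape is made up:)

    import sys

    def depth_recursive(node):
        # Each element costs one stack frame; deep input raises RecursionError.
        return 0 if node is None else 1 + depth_recursive(node.get("next"))

    def depth_iterative(node):
        # Same logic, constant stack usage.
        depth = 0
        while node is not None:
            depth += 1
            node = node.get("next")
        return depth

    # Build a list just past the default recursion limit (~1000 in CPython).
    head = None
    for _ in range(sys.getrecursionlimit() + 100):
        head = {"next": head}

    print(depth_iterative(head))   # fine
    # depth_recursive(head)        # would raise RecursionError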


There is a whole field of algorithms that are best described recursively where the recursion depth is guaranteed to be logarithmic. Sorting algorithms like merge sort are a basic example. Using recursion for those is fine.

Apart from that I tend to agree with you (for the vast majority of programming contexts).
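(A quick Python sketch of why the depth stays logarithmic: merge sort halves the input on every call, so even 100M elements is only ~27 frames deep:)

    def merge_sort(xs):
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2                       # halve: depth is log2(n)
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):  # merge the sorted halves
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1]))              # [1, 2, 5, 9]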


> Sorting algorithms like merge sort are a basic example. Using recursion for those is fine.

General sorting is exactly the type of thing I never use recursion for. Merge sort is O(n log n), but sorting 100M records with a recursive implementation of it can blow the call stack in no time, depending on how you use recursion.


What exactly are you referring to? log 100M is roughly 27. A call stack depth of 27 has not been an issue on any general purpose system for a very long time now. Granted, there are certainly some niches left where you'd want to avoid that, but those are clearly exceptions.


I really want an IDE that puts a little yellow warning thingy on the screen whenever the compiler can't TCO a function.


Clojure has a form of this in that the 'recur' special form is used for tail-calls and self-calls are used otherwise. Using 'recur' from other than the tail position is a compile-time error.

http://clojure.org/special_forms#Special%20Forms--%28recur%2...


As someone who does not (yet) do recursion and looping-by-recursion in his sleep, I deeply appreciate Clojure's hand-holding to make TC situations completely explicit.


Scala has something like that (@tailrec makes it an error if the annotated function isn't TCOed), but I find TCO can make the failures worse. StackOverflowError will hit my high-level generic exception handler, retry that task appropriately and eventually fail it out using my normal failure handling. With TCO that thread instead just spins forever.


I know scala has a @tailrec annotation which will fail to compile if the function is not tail recursive.


I agree with other's statements about not judging prematurely. I feel like every software engineer I have ever run into (including and especially myself) has done some or all of these things at one point or another. We all have bad days, we all have poor judgment sometimes. Giving people the benefit of the doubt in cases of lapsed judgment seems like it would create a far less hostile work environment. If it's systemic, it's one thing... but let's try to avoid a witch hunt mentality. Everyone's going to have off days.

The only one whose legitimacy as an antipattern I would question however would be this one:

>"Bulldozer code" that gives the appearance of refactoring by breaking out chunks into subroutines, but that are impossible to reuse in another context (very high cohesion)

I think it is useful to break up complex logic into discrete methods that represent some logical step in the overall algorithm. If they're private methods, I feel like that can add to the readability of the algorithm rather than having it be in one monolithic chunk. I've seen two camps on this - keep it inline vs. make private methods for discrete steps, and I find the latter to be more readable when I look through code (my own or someone else's).

There is a limit to that, of course. If you end up with a tangled tree of methods, that could work against readability... but even then, if you took that tangled tree and made it a single monolithic method, how often would that actually end up being more readable?
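(To illustrate what I mean, a hedged Python sketch; the invoice domain is made up:)

    class InvoiceReport:
        def build(self, invoices):
            # The top-level method reads like the algorithm's outline.
            valid = self._drop_cancelled(invoices)
            totals = self._totals_by_customer(valid)
            return self._format(totals)

        # Each private helper names one logical step; none is meant for reuse.
        def _drop_cancelled(self, invoices):
            return [i for i in invoices if not i["cancelled"]]

        def _totals_by_customer(self, invoices):
            totals = {}
            for i in invoices:
                totals[i["customer"]] = totals.get(i["customer"], 0) + i["amount"]
            return totals

        def _format(self, totals):
            return "\n".join(f"{c}: {t}" for c, t in sorted(totals.items()))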


It depends. If the code doesn't end up deeply indented and reads like a script, I prefer to read one long method rather than jumping around a hundred small methods and trying to keep the overall context in my head. I think this preference applies more to higher-level scripting languages, and not all the time.


Not sure about the article, since it doesn't seem to say anything useful, but one problem I'm having with beginners is that they are extremely superficial in analyzing a problem, extremely superficial in coming up with solutions, and have to be told what to do in very precise words. Even if you give them a list of things to do, you still have to review their work because they might have missed some of the bullet points.

This happens, I believe, primarily due to lack of experience: they haven't suffered from a superficial understanding of a problem with clients rejecting their solutions, they haven't stayed up too many times at 2 a.m. fixing production issues, they haven't witnessed the proverbial big ball of mud coming to life because of patched-up solutions, and they haven't had a project that had to be rebooted in part because of technical debt they created. We as humans only find out that fire burns badly once we get burnt a couple of times.

And as a senior I'm completely out of my league when it comes to educating other people; I'm totally incapable of compressing 5 or 10 years of experience into a couple of months' worth of education. I have initiatives, like let's do presentations on Fridays, let's do code reviews and so on. And we do them for a couple of weeks, but then we observe that nothing changes and that our productivity dropped, because communication represents points of concurrency in the development process, and concurrency represents the death of parallelization, which is incompatible with really harsh deadlines.

And because these deadlines are a never-ending story and the seniors are overwhelmed, management kicks in to "solve the problem" and starts thinking of hiring more people, which really means even more rookies to train, because good people with the required experience are rare. And the problem to be solved by our project is so complex that only seniors with a deep understanding of it can come up with working solutions for the core pieces; the number of seniors isn't growing, and those seniors also have to train people and continuously fix the mistakes the rookies made. It's enough to say that it's turning into a nightmare.


And I bet management have no thoughts about giving you a pay rise....


Heh, that's an interesting discussion - management never has any thoughts about giving pay raises, you have to ask for it, sometimes brutally.

Developers should learn a bit about marketing, have a couple of valuable things on GitHub, have a blog, have an up to date LinkedIn, etc... then you end up being contacted by recruiters, which is cool for an ego boost and as a safety net. It's then much easier to go in and say "Hey Bob, I'm producing twice the value you expect for this salary, work conditions have changed, so I want a significant pay raise".

And sometimes you have to threaten resignation, which doesn't really work in big companies, because there you have the "cog in the machine" mentality, with developers often being at the bottom of the food chain. But in smaller companies, or when working as a consultant, or in general when the "stock holders" have good visibility into the value that you bring, it's a very effective tactic.

It's also very healthy to get a pay raise, especially when stress levels are going through the roof, as it keeps you sane: what's worse than being stressed about deadlines is being stressed while not being paid enough. In other words, compensation is a hygiene factor, and when working in unhygienic conditions there's nowhere to go but down. But you know, fortunately we live in times in which the supply and demand ratio favors us :-)


My list is a lot shorter than this screed. A bad programmer is one who is passionless: no concern over how poor his work is, no desire or care to improve.

For me, defining a bad programmer is as simple as this.


If you're good and work for a reasonably large organization, you'll be thrown into a bunch of projects. It's reasonable to be passionate for some and completely dispassionate for others.

Any project that spans over multiple stages (requirements gathering, design and architecture, coding, testing, release, bug fixing, maintenance, support) has some exciting stages (mainly design and coding) and some less exciting, especially when you're brought in to fix somebody else's bugs or work on somebody else's mess. It's also reasonable to not be super-passionate about such "opportunities".


There are passionate programmers who are breathing fire to accomplish something but who have none of the talent. While it's a mandatory quality, passion alone doesn't make a good programmer.


If anyone hasn't read it, I highly recommend the book "AntiPatterns" (http://www.amazon.com/AntiPatterns-Refactoring-Software-Arch...)

... or just google antipatterns.

My personal pet peeve is that many programmers don't refactor nearly often or thoroughly enough. It should be a natural part of the development process. Especially with large programs, you reach certain points where an all-encompassing refactoring is needed to simplify the code, yet many power through, leading to a bloated mess.

I've found it's not uncommon that even decent, complex code can be shrunk 70%-80% using refactoring, code generation and config files.


As a programmer, uncertainty is my second greatest obstacle, second only to my ability to fit large problems in my head. Uncertainty comes from not being able to fit the entire problem in my head and compare the different ways it can be solved.

With experience comes the ability to create new kinds of heuristics for various problems, which makes it easier to fit them in my head as I work. Reasoning about them, given reasonable simplifications, requires experience. Lots of it.

And while acquiring experience you make mistakes, costly ones. You might put lots of time in without really being able to solve the right problem. That can make you a bit scared of trying next time, so you avoid it. I think some programmers become so scared of 'doing it right', because of the risk associated with it, that they avoid it more or less altogether.

The good programmers are those who manage to get enough experience to reduce the risk before getting scared off by their mistakes.

That's the theory I just came up with anyway.


You should never have to "fit a large problem" into your head, at least in detail, at once. I think it's something of a myth that that's what great programmers do, when in fact they structure their code to avoid it.

I've built immensely complex software and in the end the code was good not because I was smart, but because I am stupid, and adapted the code to work around my limitations. It's the "smart" programmers I worry about :)

The key to conquering complex problems is to layer them into levels of abstraction, dividing them into ever-smaller parts that each do one thing. Then, one step up in the abstraction, you don't have to worry about the details beneath. If you think about it, that's how programming languages themselves work; we no longer push around registers and instructions. It's turtles all the way down.

Your mind should always be able to encompass the level it's currently on. That's why reducing the amount of code is important. Increasing the information density of your code makes it more readable and understandable. It also makes it easier to spot patterns. Whenever your code starts feeling beyond easy comprehension, it's time to divide it further.

If a problem feels too large, my advice is just to start writing classes or methods that attack different aspects of the problem, and then compose those.

There's a nice quote by Linus:

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

Good structures will shield you from a lot of complexities and simplify your code. Structures are easy to test. Always consider if there isn't a collection or other data structure that could swallow some complexity.

Also, never be afraid of making mistakes. Coding is an explorative process, and the most valuable thing you can do is make a mistake and realize it, because that often means the solution presents itself clearly.

No code is wasted code, because even though you might end up deleting all of it, the lessons it leaves behind are valuable.

In the end, coding is a lot like writing a book: get everything down on paper/code without worrying too much, then iterate and move things around until it's good :)


I don't personally use functional programming languages on a daily basis, but this, I feel, is the veil between programmers who are highly successful and those who limp along: being able to take a mathematical (side-effect-free) approach to solving problems. This requires being able to break a larger problem into subsets. The closer you can get your methods to having no effects on the global scope, the better a programmer you will become. In other words, you don't have to be able to keep the whole problem in your head, but you have to be able to recognize how to subdivide the problem into manageable bits.

The other major trap that I think affects programmers is not appreciating what is happening when you transform your data: when you make a change to data state, how does that affect the system? Where are you persisting that change? Do you need to persist a change or is it simply a temporal snapshot of something happening?

Basically, if somebody claims to be a "good" programmer but they don't have a solid grasp of set theory, then I'm sceptical that they will be a "good" programmer in the long run. Naturally you also have to become familiar with the language you are using and the environment you are developing for.
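(A tiny Python sketch of that decomposition style, with made-up data:)

    def active(users):
        # Subset selection: no global state touched.
        return [u for u in users if u["active"]]

    def emails(users):
        # Projection over the subset.
        return [u["email"] for u in users]

    def notify_list(users):
        # Pure composition; each piece can be reasoned about alone.
        return emails(active(users))

    print(notify_list([{"active": True, "email": "a@example.com"},
                       {"active": False, "email": "b@example.com"}]))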


I'm not going to argue that refactoring is unnecessary, far from it. However, with each refactor there is always the possibility of introducing a hard-to-debug regression. Even if you follow rigorous TDD, your tests never cover every possible behavior, which means that new bugs can and do creep in.

I guess what I'm saying is that refactoring shouldn't be entered into lightly.


I agree with you to a certain extent but... the road to code hell is paved with "don't disturb the tests" intentions ;)

This is a bigger problem the less continuously you refactor. I've seen this mentality in a lot of projects, where they don't dare "mess with the code" because it might introduce bugs or break the tests... but IMHO this is where I think TDD sometimes goes too far.

It should ring warning bells if your code, tests and development cycle are so fragile that you can't do continuous cleanup. In its extreme, this is the lava-flow anti-pattern.

It must always be preferable to keep the code base as compact and simple as possible: bugs will be easier to find and the code easier to understand for everyone... otherwise you're just accumulating technical debt.

Rather than storing nitroglycerin in a pillow factory it usually is a good idea to make it less volatile.


If you refactor in a way that you improve code readability and reduce complexity, you substantially reduce the likelihood of having an undiscovered bug that is already in your code.


I've found the opposite was a worse problem - programmers who discovered some new "pattern" and would rewrite the whole codebase every week. Endless churn and zero business value.

Code generation and config files are evil. If it affects program behaviour, it belongs in the codebase, where you can search for it, and where it's part of the normal release/review/testing cycle.


And code complete, clean code...


Or you could just code correctly the first time...


That's like writing a book "correctly" the first time without drafting (there are a few authors who do that, but not many). Or drawing a picture without sketching.

Much of the coding process is gaining knowledge of the problem you're trying to solve and discovering patterns as you go along.

Also, features are typically tacked on over time, and it reaches a point where new patterns are needed that weren't needed before.

I agree with you in some sense though, some patterns are low-cost and high-reward in versatility & maintainability. Like using events rather than explicit references, and you should use them from the start as a matter of "good habits".

Some more overreaching patterns like factories, modules, plug-ins etc etc only make sense when the code reaches a certain scope.


W. Edwards Deming famously said that when the focus is on quality, quality rises and costs fall; and that when the focus is on cost, costs rise and quality falls.

We live in a time where we're encouraged to make mistakes. Refactoring, fail fast and often... I get the impression sometimes that we can go on failing forever and it's OK. Everything is awesome. Because it's fun writing code, right? Just jumping in there and not giving a thought to the step after the next. Managing complexity? YAGNI. My tests passed, didn't they? And we can always refactor.

Getting it right takes discipline, research and experience. I prefer to get it right the first time. It doesn't always work out as planned, but to me the need to refactor is a sign to stop and reassess what brought me to that point. And it's always a result of having cut a corner somewhere.

Refactoring costs. And upfront design is about as fashionable as leg warmers.


I feel like we're talking about different things...

Continuous refactoring IS focusing on quality, it IS managing complexity. I think you've seriously misunderstood Deming if you think he stood for "Do it perfect from the start". The whole Kaizen philosophy is built on continuous improvement and elimination of waste (redundant code)

I'm not saying do a week/month/year of crappy coding and then "fix it later". Refactor continuously and as often as needed. Writing code is research.

The cost of refactoring is the greatest when you've done it too seldom. Otherwise with the right tools it's pretty efficient these days.

I see larger dangers in doing up-front architecture astronauting and making decisions before you've internalized all the problems.

For complex projects it's of course appropriate to gather requirements, limitations, etc., and make a rough design, but going into too much detail is putting the cart before the horse... IMHO of course.


One of the coolest things I've learnt after over 20 years in this industry is to seek first to understand, and then to be understood.

We're talking about the same thing.


a3voices, you need to write a book on your technique for writing code correctly the first time. People have been trying to do that for as long as there have been programmers. You will make billions of dollars, guaranteed, assuming that you actually have a way to do that. Quit your job, drop whatever you're doing, and start writing that book right now, it will change the world. While you're at it, would you mind taking a look at the halting problem too? Thanks!


I guess no one understands sarcasm.


> "Yo-Yo code" that converts a value into a different representation, then converts it back to where it started (eg: converting a decimal into a string and then back into a decimal, or padding a string and then trimming it)

I would add: "you designed a language that does these conversions implicitly as part of its evaluation semantics, and thought it was a marvellous idea."


5.3 is actually not bad code, if it refers to what I think it does. Basically:

"Recursive subroutines that concatenate/sum to [...] a carry-along output variable"

Is an accumulator pattern, which is useful in converting certain types of recursive algorithms into tail-recursive algorithms, which can be handled much better by most compilers (and unrolled into iteration behind the scenes, which means you get clean recursive code that still won't ever run out of stack memory).


Initially this was the reason and I still do this, but it should be noted that Racket, for example, will treat the following examples exactly the same and will do just fine with the non-tail recursive example (as far as I can tell):

    (define (sum/recursive/opt lst [sum 0])
      (if (null? lst)
          sum
          (sum/recursive/opt (rest lst)
                             (+ sum (first lst)))))

    (define (sum/recursive/non-opt lst)
      (if (null? lst)
          0
          (+ (first lst)
             (sum/recursive/non-opt (rest lst)))))
I don't use any other variants of Scheme with any meaningful frequency, but I still use the uppermost variant because I'm just used to it.


signs you're too good of a programmer -

you use monads in object-oriented languages (like Python)

instead of the standard library, you get your algorithms from PDFs you have to pay a journal access fee for.

you believe in good test coverage: so do the ten thousand people who have downloaded your test suite from github.

you leave comments highlighting incorrect behavior... by the standard compiler.

you don't always formally verify that your program is optimal- sometimes you only formally verify correctness.

you're a full-stack developer, but wrote some of the layers yourself.

investors want to meet the rest of the team


> signs you're too good of a programmer -

> you use monads in object-oriented languages (like Python)

Are monads a natural fit to Python, or is this a case of "when all you've got is a hammer"...?


Python borrows list comprehensions from Haskell, but the syntax is used in a less powerful way. I wouldn't want to use non-builtin monads in a language without a compile-time type system (one of the usual advantages of monads is they save you a lot of effort testing, but this wouldn't be true in Python), but there's no reason you couldn't.

(If you want an object-oriented language with proper monads then use Scala)
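(For the curious, a minimal Maybe-style sketch in Python; the class and method names here are hypothetical, not from any library:)

    class Maybe:
        def __init__(self, value, is_just):
            self.value, self.is_just = value, is_just

        @staticmethod
        def just(value):
            return Maybe(value, True)

        @staticmethod
        def nothing():
            return Maybe(None, False)

        def bind(self, f):
            # Chain a computation that may itself fail; short-circuit on Nothing.
            return f(self.value) if self.is_just else self

    def safe_div(a, b):
        return Maybe.nothing() if b == 0 else Maybe.just(a / b)

    result = Maybe.just(10).bind(lambda x: safe_div(x, 2)).bind(lambda x: safe_div(x, 0))
    print(result.is_just)   # False: the failure propagated without exceptions

Without compile-time types, nothing stops you from forgetting a bind somewhere, which is the testing-effort point above.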


> instead of the standard library, you get your algorithms from PDFs you have to pay a journal access fee for

A sign of a good programmer, but not a good engineer IMO.


"To get over this deficiency a programmer can practice by using the IDE's own debugger as an aide,"

sign of bad programmer: relying on the IDE?

  'always have a security professional review the 
   design and implementation.'
At first glance it reads like a list of things a hypothetical programmer should do, rather than what actually happens. For a dose of reality, read this article by Joel about the job of the SDETs at MS. [0] I don't hold MS up as an example of security, but Joel paints a picture of serious programmers trying to code and then verify commercial software used by millions. [1]

Commerce and software demand another set of skills beyond this article. Read the counter-point: 'What Makes a Good Programmer' http://www.southsearepublic.org/article/2024/read/what_makes...

ps: Is there a way to prove the output of complex regular expressions?

[0] Joel Spolsky, 'Talk at Yale: Part 1 of 3', http://www.joelonsoftware.com/items/2007/12/03.html

[1] Then there's the open-source way. How many in the OS systems community really have 'security professionals' review their code? OpenBSD and who else?

(various edits)


Using the tools available to you to their fullest extent, rather than doing things the hard way on purpose just to be more "hardcore", is not a sign of a bad programmer.


Definitely agree, however I can also see the shortcomings of the "IDE languages". One of the nice things about non-auto-completable scripting languages is that they force you to write code that you can reason about yourself. IDEs let you write really inelegant code that you can still continue to slog through because of autocomplete.

Another big downside of IDEs is that they constrain your choice of tooling to only the IDE. There's a rich ecosystem of editors, preprocessors, templating, etc. for JS, but (for example) in iOS development you only get Xcode's half-working gee-whiz features like the interface builder. In my experience, the more IDE features there are, the more of your program moves into hidden, IDE-specific automagical stuff.

So resistance that you may see to IDE's is not because someone wants to be "hardcore", but because they recognize that it is desirable for code to be able to stand on its own and be understood by a human no matter the editor used.


I especially hate IDE features that don't have a corresponding plaintext representation that you can edit. For example, a graphical layout editor which saves data into a binary blob that then requires a special merge tool which is only available in the IDE.


"recognize that it is desirable for code to be able to stand on its own and be understood by a human no matter the editor used."

That and hardware constraints.


"rather than doing things the hard way on purpose just to be more "hardcore", is not a sign of a bad programmer"

I agree, IDEs boost productivity. The next question I ask is, what's your client? Mine is a tty terminal over ssh. This limits the kind of IDEs I can use, so it's cli/vi/[language]. Simple tools from a more elegant time.


Why do you choose to do it that way? If you really wanted to use an IDE you could just keep it on a github repo or even dropbox, or if it's a local or very fast connection it could even be reasonable to use X11 tunneling. But, if the CLI environment is the most convenient or efficient for whatever's being done then it's a better choice.


Not the person you were asking, but I do it that way because I work with a software package that bundles a ton of proprietary libraries that would be difficult at best to set up on a separate dev machine and because the "IDE" included with it is just a bunch of glue for CLI stuff anyway (with a text editor literally no better than Notepad). SSH+vim is seriously a better option than the tools I'd have to work with otherwise. Bad IDEs do exist.

But that's a niche tool in a niche field with a niche language.

(It's the Cloverleaf interface engine used in healthcare and built on tcl, for the curious).


"Why do you choose to do it that way?"

One word, RaspberryPi. [0]

Though I use a small laptop as a client and sometimes at the machine with monitor and keyboard, most work I do is on a Pi, remotely. I'll be upgrading my Pi to version 2 and splash out the necessary $40.

Forty dollars, think about that.

All I need to hack with: #raspberrypi & #robertsradio ~ https://flic.kr/p/o1yXKe

[0] https://www.flickr.com/photos/bootload/tags/raspberrypi


Isn't your small laptop a more powerful system? You state what you do, not why you are doing it, which was the original question.

FTR I support your decision to include your own constraints. I believe better things are developed from constrained systems than from without.


"Isn't your small laptop a more powerful system?"

EeePC with win.

Now I could fire up the beastly server I have on my desk with 300W power supply, multiple HD's with linux, but as an experiment I moved all the dev over to a Pi. Add a few more Pi machines and I can use one for image processing, another for dev, another for sound/media & I have the core of what I used to do on a desktop for the price of a fast HD.


You could also mount it as a remote filesystem (such as sshfs) and run an IDE locally.


"mount it as a remote filesystem (such as sshfs) and run an IDE locally."

Could, but it's a PIA.


Agreed. People wouldn't appreciate it if I ignored the "interactions" warnings when prescribing medications just because I felt like I should be able to recall every drug-drug interaction myself...


On the contrary. The worst programmer I've worked with was the one who insisted that debuggers were a sign of a bad programmer and one should be able to fix problems without them. Not only was he negative-productivity himself, but since he was fairly senior and many of the other developers were quite junior and listened to him, he lowered the productivity of the entire team.


While this is certainly true, it's worth pointing out that debugger != IDE.


What was his advice? Heavy printf? Or lengthy prior analysis?


I guess it was visual code inspection... Staring at the screen for hours, trying to figure out why the code doesn't work, while it should work.


"You should understand what the code is doing"


So purely cerebral simulation.


> ps: Is there a way to prove the output of complex regular expressions?

It's not really clear what you mean here.

Maybe you mean "if the input matches this regex, then it can't be used for command injection"? Then you'd just need a proof that something matching that regex can't escape the intended context in the code you're generating. That's fairly trivial if you have a regex describing all escape sequences, for instance.

If they're not really regular expressions, then the answer is more complicated but still generally yes.
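(One trivial Python example of the whitelist approach; the "context" here is hypothetical:)

    import re

    # If the input fullmatches this, it contains only [A-Za-z0-9_], so it
    # cannot smuggle quotes or shell metacharacters into a generated command.
    IDENTIFIER = re.compile(r"[A-Za-z0-9_]+")

    def safe_for_context(s):
        return IDENTIFIER.fullmatch(s) is not None

    assert safe_for_context("user_42")
    assert not safe_for_context("x; rm -rf /")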


Comes from the perspective of the author, see the note:

> This paper is not meant for grading programmers [...] This paper was written to force its author to think, and published because he thinks you lot would probably get a kick out of it, too.

The intent of that remedy is to wean the programmer off of using a debugger and to executing the code mentally.


"The intent of that remedy is to wean the programmer off of using a debugger and to executing the code mentally."

good point.


No, that's a sign of a programmer that kept up with the times and is not afraid to release control.


> 1.4 "Yo-Yo code" that converts a value into a different representation, then converts it back to where it started (eg: converting a decimal into a string and then back into a decimal, or padding a string and then trimming it)

An important exception: many abstractions for data passing end up creating exactly this, but are good practice if you can afford them because it means one side of the yo-yo can be changed without breaking the other.
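(A hedged Python sketch of that kind of boundary:)

    import json

    # Looks like yo-yo code (dict -> string -> dict), but the serialization
    # boundary lets either side change its internals without breaking the other.
    def send(record):
        return json.dumps(record)    # producer side of the "yo-yo"

    def receive(payload):
        return json.loads(payload)   # consumer side

    assert receive(send({"id": 7})) == {"id": 7}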


Likewise, 1.5 "Bulldozer", aka breaking up a function into subfunctions that aren't "usable everywhere".

I can't claim I'm the best programmer, but I've used this to "simplify" programs and reduce nesting to great success.

Add to that: Aspect-Oriented Programming in C#, using attributes to add/remove logging on functions, is easier if things are broken down into parts.

Subroutines shouldn't simply be about reuse. I think that narrows their purpose too much. There is a place for code reuse. There is also a place for functions that only get called from one location.


Thinking about subroutines as code reuse is wrong. Subroutines are named blocks of code. When you're writing a program, you're building a language (or, I'd say, a series of languages) and subroutines are the basic building blocks for that. You want to split your function into a bunch of smaller ones if it makes the whole thing easy to read; reusability is just a (useful) side effect of a properly designed "language".


Unfortunately, a lot of bugs can be caused when a function that was only ever intended to be used from a single place gets called from another place, when it seems to do the needed work.


Here's where nested function definitions can be useful.


That's the one thing I miss from Pascal.


Subroutines are not "named blocks of code" any more than they are facilitators of code reuse. Inline comments around a block of code also match the concept of a named block of code.

The point of subroutines is to gather the code that implements a particular concept, abstraction or algorithm in one place. Sometimes it coincides with code reuse or splitting your function into a bunch of smaller ones.

The best way to tell if you are using subroutines effectively is if you can explain the effect of the subroutine without describing the implementation of it or (when splitting up a function) what it is trying to achieve for the function that calls it. If you can only describe it in relation to the function calling it, you fail. If you achieve code reuse and it's more than a line of code you get an automatic passing grade, it's just not necessarily a good grade.


Great point. A subroutine is essentially assigning a signature (name, args, returns) to a block of code so you never have to look into the details of that block again. A bad subroutine is one that hides nothing: all the internal details of the subroutine leak through the signature.


> There is also a place for functions that only get called from one location

Do it inline. If you need it to look separate from the other context try an immediately invoked function or at worst some comment demarcation. No need to spray functionality across a file(s) when you can do it all inline, especially if it is only called from one place. I'll admit I didn't come up with this 100% myself, but having tried both ways, I really like inlining.

http://number-none.com/blow/john_carmack_on_inlined_code.htm...


Looking at that, and considering A/B/C, I've seen myself using B most often - but I've not been afraid to use any of those.

I'm not a fan of A, because I want to see the "bigger picture" first and then dive into details (minor 1/2/3/...) as needed.

The reason for doing B vs C, in my experience with C#:

* Logging. I can add, via PostSharp ( https://www.postsharp.net/features ) or similar extensions/addons, something as simple as `[LogDebug]` to a function and get logging goodness without having to mess with the lines inside the code. PostSharp and Log4Net are my go-tos. (A rough Python analog is sketched after this list.)

* Simplification - Partially included above (no need for a lot of logging code with the right tools).

* Mental Gymnastics - it's easier to see Step 1, 2, 3 when they are next to each other instead of pages apart.

* Scope - you only worry about the parts that get sent in via parameters, and only a single return is normally needed.

* Nesting - I've used minor functions to remove 3-4-5 levels of nesting before.
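(The rough Python analog mentioned above is a decorator; PostSharp itself is C#-only, so this is just an illustrative stand-in:)

    import functools
    import logging

    logging.basicConfig(level=logging.DEBUG)

    def log_debug(fn):
        # Wraps a function with entry/exit logging, like a [LogDebug] attribute.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            result = fn(*args, **kwargs)
            logging.debug("exit %s -> %r", fn.__name__, result)
            return result
        return wrapper

    @log_debug          # logging added without touching the function body
    def step_one(x):
        return x * 2

    step_one(21)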


Aside from logging (and games don't typically do a lot of logging), everything you mention is covered, and argued against, in the article.


Thanks for this link. A lot of these things were things I've grown to believe from working in game development (and listening to programmers better than myself), but I never had seen it written out as competently and thoroughly as this.


If you're looking more at improving readability than performance, I like Lisp's approach to that, with flet and labels, which let you establish a bunch of functions limited to a scope (just like let does for variables): you can split your function into a few named subroutines and then define them inside that function. So you still get readability, and can keep code that's only called once near the place it will be called from.
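(Python's nested defs give a rough analog; a hedged sketch with a made-up report:)

    def render_report(rows):
        # Helpers scoped to, and textually near, the only function calling them.
        def header():
            return "name,total"

        def line(row):
            return f"{row['name']},{row['total']}"

        return "\n".join([header()] + [line(r) for r in rows])

    print(render_report([{"name": "a", "total": 3}]))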


You know, unless my reading was wrong, the conclusion to be made is that Knuth was right... Inlining seems to be an extremely poor man's version of literate programming, isn't it?


Especially when you're aware of the difference between pure and impure functions, and when you believe in the importance of testing. I'd rather test a dozen small, mostly pure functions, and have the function that ties them all together be 'obvious' (and still integration tested), than to have one gigantic complex function that I need to try and test, and any related change causes a dozen tests to break.


I would be suspicious of any subroutine whose function couldn't at least be described in words. So I agree that a function might be practically impossible to use in any other context, but it's worrisome to me when I see a function that doesn't do something that can be described in words.

In most cases, after factoring out a function from a bigger function, it's usually a good opportunity to think about what this function really does, and to make it do something conceptually simpler (even if that is less tailored to its sole use case). This way, you can even write meaningful tests for the new function.


I agree; it's very seldom that a method should be longer than 20 lines, and optimally around 6 lines. Any method over 50+ lines deserves a long hard look at whether there isn't a way to simplify it. Inlining everything only gets messy.

I would say that if you have a method composed of many methods you might want to encompass that into a separate class. That way you have everything needed to look at that method nicely in one place rather than interwoven with another class.

Depends on the scope of the method though, you want neither a blob nor a poltergeist.


Came here to mention: yo-yo code sometimes exists as a workaround for legacy implementations. Take XML-RPC, for example, and the fact that the data types that can be passed are limited, with int fixed at 32 bits. What happens when you need to pass a 64-bit int? You either need to convert it to its alphanumeric string representation or convert it raw into a character array, and then convert back on the server side. Often converting to the numeric string winds up producing more readable and maintainable code.
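(A hedged Python sketch of that workaround; the endpoint and method name are made up:)

    import xmlrpc.client

    big = 9_007_199_254_740_993          # does not fit in XML-RPC's 32-bit <int>

    # Client side: encode as a decimal string before sending.
    proxy = xmlrpc.client.ServerProxy("http://example.com/RPC2")
    # proxy.store_counter(str(big))      # call commented out: no real server here

    # Server side: parse it back into a native integer.
    def store_counter(value_as_string):
        value = int(value_as_string)
        return value

    assert store_counter(str(big)) == big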


Neither the good programmer nor the bad programmer portrayed here would make a good employee/consultant, the former because things will break while the latter because things will never get delivered. They say software is eating the world, and I believe this is true, but not because genius/geek programmers abound but because there are hordes of smart, good-enough 'programmers' out there putting up good-enough solutions. And yes, most will not be able to write a recursive function, but who cares: the solutions run and solve real problems. That being said, we must thank the gurus who put up the layers that make it possible for the makers to bring out such solutions.


I agree. It is similar to that old phrase "perfect is the enemy of good." Sometimes what an idea needs is something minimal that gets the product out there and tested, shipping and selling in the real world. Sometimes the most financially effective programming isn't elegant, but rather something quick and dirty, transitory and ugly: something meant to be used and improved on or discarded for something better if and when time allots for it in the future.


> 5.1 You don't use any ergonomic model when designing user interfaces, nor do you have any interest in usability studies

Oops - almost guilty. Any suggested reading?


I'm guilty and somewhat afraid of going there because of what I see. Most websites and web-apps are the opposite of ergonomics and I sometimes wonder if that isn't the result of the current state of UX field, that focuses on shiny things that you can sell easily instead of building tools that work. The trend I see in UX is that of dumbing-down your software and enforcing one, carefully-drawn user flow, that is pretty but not ergonomic.

I don't want to demonize the entire field - it's just a worry I have, from observation.


In my small amount of experience, reasoned application of Fitts's Law goes a very long way for UI design.


The Humane Interface, by Jef Raskin, goes into the basics, with examples and tests, including synthetic usability measures that can go a long way.

There aren't a lot of good tools or frameworks, unfortunately; you have to do the math yourself.


I think I'm somewhat guilty too, but I did find a cool tool that can help: http://peek.usertesting.com/


>Incompetence with Regular Expressions

Well, method acting here I come!


I hope "being able to use an online tool" counts as competence...


My take on the opposite - what makes a good programmer: http://henrikwarne.com/2014/06/30/what-makes-a-good-programm...


These articles always make me feel insecure and for some reason I continue to read them.


> 6. Distrust of code

PHP/C++/Javascript

> 4. Dysfunctional sense of causality

> Symptoms: When called on to fix a bug in a deployed program, you try prayer. Your debugging repertoire includes rituals like shining your lucky golf ball, twisting your wedding ring, and tapping the nodding-dog toy on your monitor. And when the debugging doesn't work, you think it might be because you missed one or didn't do them in the right order.

Once again PHP ...

So it seems I am a bad programmer after all.


My definition of a bad programmer is one who does not want to learn and progress. Everything else is just levels of experience.


> Credit card numbers or passwords that are stored in an unsalted hash

What's the purpose of hashing a credit card number? For passwords, please use a proper KDF instead of reinventing salted password hashing. While it's widely known not to build your own crypto, most people still write stuff like sha256(sha256(password) + salt).
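(For reference, the standard-library way in Python; the iteration count and password are illustrative:)

    import hashlib, os

    salt = os.urandom(16)
    # PBKDF2-HMAC-SHA256 with a high iteration count, instead of a
    # hand-rolled sha256(sha256(password) + salt).
    derived = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
    print(derived.hex())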


A lot of comments here take it on a political level, even if the discussed considerations are (mostly) technical. (I am personally touched by a few points in there and I understand the temptation to go that way myself.) The reasoning behind these "signs" however, is worth its weight in gold and I only hope people will remember and come back to it with cooler heads. Critique (and especially that kind performed on oneself) is almost always necessary in order to get better, even if it hurts. Seeing the objections raised, I was tempted to recommend such kind of critique only if there wouldn't be any chance of breaking you, but I can't anymore. If you are to fail - fail early, fail often.


[deleted]


> The only metric I care about for a bad or good programmer is simply the ability to produce a high volume of code, quickly, with no bugs.

"This code is unmaintainable" is a bug.

There are exceptions. They don't apply to you or your business.


I would instead measure the quality of the programmer by the volume of problems solved / tasks completed, aiming for the least amount of code possible, or preferably no code at all.


> Using whatever syntax is necessary to break out of the model, then writing the remainder of the program in their familiar language's style

Depending what you're breaking out of and into, and what your work looks like, this could be idiocy or genius. :)



Working with HTML/CSS has made me feel like I've got a "chronically poor knowledge of the platform's features". In 6 years I've not been able to form a coherent mental model of how it works that lets me actually go "oh, I need to do Foo? I'll just do Bar and Baz."

Is this a sign that I should stick to working with Scheme, Python, Go, JS and other actually Turing-complete tools, and that my mind isn't made for layout languages? (I have the same problem with TeX.) Or is there something I can read that will finally help this make sense? Or do I just need another 4 years?


I half-agree with your other respondent, olavk. If you're deeply entrenched in imperative-style coding and are just missing familiarity with - what's the word? Declarative programming? - then by all means, get down with it and get some practice. Set yourself some "toy" tasks and learn by doing until you're familiar with the concepts and effects. But meh, maybe after 6 years you've already been way down that route.

Something I've found helpful is to interact with a good HTML/CSS debugger such as the one in Chrome. You can highlight a chunk of your output page and see how the CSS comes together at various levels to create the final look of it. IIRC Chrome even lets you change those settings on the fly in the browser to experiment.

On the other hand, you have my complete sympathy insofar as I find that model very cumbersome in a few places. It feels to me like something designed by a committee, and one very intent on reducing CPU load. There are things I can do with the table model, especially stuff like proportional columns or regions whose dimension is a constant +/- a product, that are like pulling teeth with the CSS layout model. There's a reason there are huge collections of "tricks" advice on how to do these things - tricks that IMO wouldn't be needed if the model had actually been designed for "programmer" types.

Then again, maybe you just need to let go. TeX especially is highly opinionated about how it slaps text on a page, and allegedly those are the opinions of experts in the field. So if your output is not looking like your mental model of the expected output, maybe your mistake is being so specific in your expectations? Both CSS and TeX allow you to micromanage your output more or less down to the pixel level, but very often I'm quite glad to let the layout engine do its job while I do the rest of mine.


I think you should teach yourself the fundamentals of HTML and CSS. It is not magic, and it's not even that complicated. But if you have worked in frontend development for years, with frameworks, browser-specific workarounds and so on, you tend to miss the forest for the trees. If you can program, you can definitely learn to understand layout languages.

The problem is perhaps that we acknowledge that programming is challenging, while layout languages are considered "easy" and so are never really studied in depth.


Right, that is what I'm getting at: How does one actually teach oneself the fundamentals of HTML/CSS though? Off the top of my head, I know I've read through

- http://learnlayout.com/

- http://www.html5rocks.com/en/tutorials/internals/howbrowsers...

- http://htmldog.com/guides/html/beginner/

- http://htmldog.com/guides/html/intermediate/

And long ago did the same with w3schools.

But it feels like the box model only accurately describes how browsers work about 70% of the time. How do I learn to be able to predict at least 95%? I have suspected that the right approach is actually to take 50 or so layouts sketched on paper and do them. Is that actually reasonable?


"Using whatever syntax is necessary to break out of the model, then writing the remainder of the program in their familiar language's style"

This one really hit home with me, having previously dealt with developers who worked in both .NET and PHP, with frameworks that handled models very differently.


What's up with 5.5?

>Your program produces output to be read by another (eg: a browser)

This is a problem?


> Your program produces output to be read by another (eg: a browser), or implements a network protocol, and relies on the other party's software to be significantly tolerant to spec violations

The problem isn't producing output for another program, but relying on that other program to catch your mistakes.


Ah. I parsed that sentence incorrectly.

I read it like this:

> Your program produces output to be read by another (eg: a browser), or (implements a network protocol, and relies on the other party's software to be significantly tolerant to spec violations)

instead of like this:

> (Your program produces output to be read by another (eg: a browser), or implements a network protocol), and relies on the other party's software to be significantly tolerant to spec violations


There's a difference between being a bad programmer and bad programming.


> Inability to comprehend pointers

> Inability to determine the order of program execution

I don't think that pointers or mutable variables are important when you are a pure functional or logic programmer.


I'm just nitpicking here, but

> Recursive functions that don't test for a base condition

Base conditions aren't always necessary in lazy languages.

Example in Haskell:

    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)  -- the infinite Fibonacci list
    -- take 10 fibs == [0,1,1,2,3,5,8,13,21,34]


Ah but that's not recursion, it's corecursion ;)


You are right. I had heard of corecursion before but never thought of it in this context. Thanks to your comment I feel like I have finally grokked it :)


The author of this piece should not fret. Soon enough computers will program themselves, and they will be vastly superior in this task to their human counterparts. Perhaps then the author will bask in the glory of programming perfection while waiting to be processed in the unemployment line.

Seriously, get over yourself. If the programmers you hire fail at their task, blame your lack of leadership, mentorship, or hiring skills. They can't all be the A-team, can they? Learn to win with the B-team.


What does homebrew "business rule engines" mean? Are there a bunch of them already made? What would be an example of this?


Call me overly sure of myself, but it is painful reading this since I can relate a lot of it to my co-workers.


A bad programmer is one that has worked for many years in the field and fails to help those less experienced.


Still fun to read. Could anyone shed some light on the author's background that led to this piece?


(eg: calling the save() function multiple times "just to be sure")

Who else does this for apps with a UI?


Bonus points for including remedies.


Symptom: fears that the AI will (in the near future) be writing programs for us, so why bother...

Remedy: ?


Ok, time to give up my career. I just realized that I'm a useless programmer.


Surprisingly good. I got a good chuckle recognizing myself in some of these.


>>4. Inability to comprehend pointers

I love C, it's my favorite language.... but there is no way that pointers are important enough that not knowing them == bad programmer. There are plenty of good developers who've never even had to deal with pointers, even in 2012.


Well, it's necessary to the small extent that you need to know whether something is passed by value or passed by reference. And in order to understand that, you need to at least know what a pointer is. I don't think it's fair to say that "plenty of good developers" haven't ever needed to think about pointers. Maybe in newer languages they won't fall victim to a few of those "symptoms", but they at least need to know what they are.
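
A minimal Java sketch of the distinction (made-up example): the array reference is copied, so mutations through it are visible to the caller, while the copied int is not:

    class PassingDemo {
        static void mutate(int[] a, int n) {
            a[0] = 99;  // visible to the caller: both copies of the
                        // reference point at the same array
            n = 99;     // invisible to the caller: n is a local copy
        }

        public static void main(String[] args) {
            int[] xs = {1};
            int x = 1;
            mutate(xs, x);
            System.out.println(xs[0] + " " + x);  // prints "99 1"
        }
    }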


Ah, so pass-by-value vs pass-by-reference is a different topic IMHO than int *x or &x. The implementations of both are, I'm sure, extremely similar if not exactly the same, but I agree that if a developer doesn't understand pass-by-value vs pass-by-reference then that's a serious problem.

I'm probably showing my age in that I consider the pass-by-reference concept in modern languages something that hides the concept of pointers rather than something that indicates an understanding of them. Get off my lawn, etc.


These posts are ironic.


Fantastic!


Yup... and this is why I stopped: because of this kind of tone, which I found toxic in actual job settings. Then again, its points are fairly valid if the goal is to filter for only the strongest programmers.

Tell engineers to come up with their expected salary in a room together and they will fight and compete with each other to work for free. The rational ones - or the "bad" engineers, seemingly - walk out because they still have their sanity intact.

Interestingly enough, companies and employers know this about engineers, and are more than happy to screw you out of what you actually deserve. All they have to do is sit aside and watch the professionals sabotage each other. They are the perfect employees. Fire them once they build the pyramid, and find slaves overseas to maintain it.

This is what I realized at the end of the day. All I care about is turning ideas into real products that work. If they are products I can sell on my own, even better. If my skill and experience allow me to do this over and over, then that is good enough of a skill for me. I might not be a 10x engineer, but neither do I want to be drawn into gladiator matches every time we build something, arguing over irrelevant things and delivering late.


Yes, that is the worst part to me. I worked at a place once, many years ago, where a lead software engineer came in and started what I can only equate to a turf war. He'd set people up to get them fired, call out competent engineers who made a mistake as idiots, etc.

One time, he assigned me work - and never mentioned that someone on another team had been assigned the exact same work. He wanted to see me do it faster than the other guy so that he could fire him. So when I completed it, he used it as ammunition against the other guy (even though, it turned out, the other guy had done a perfectly good job). I stood there helpless and ashamed that I'd been used as a weapon to tear another good programmer down.

A couple of months later, the company began to lay people off and that lead bolted for the door. I did too, but I was just glad to not have to work in such a hostile environment anymore. So much anxiety.


That is revolting. I hear a lot of nightmare stories like this. Unfortunately, I think as engineers we are forever doomed to be treated like peasants, unless we learn how to play politics, kiss ass, and move up into a manager position.

The concept of a manager seems so old-fashioned and backwards, especially now that we are dealing with a manufacturing industry that produces nothing physical, where a product is really just a meeting of the minds of the stakeholders, engineers, and customers. Applying traditional managerial structures - which were suited to overseeing factories, keeping workers from rioting, and maximizing labor output - to an information-based product is nothing short of censorship.

My idea of a dream team of software engineers is one with as little technical and business censorship as possible. It would operate sort of like a day-trading desk, except that we'd end up building one product that reflects the demands of the customers, the future market. Instead of having a manager tell you what to do or what to build, you act on information from customers directly. This might be a tougher hybrid to find, but I reckon we'll see more and more entreprengineers as traditional manufacturing corporate structures lose their appeal.

Wishful thinking, but I hope some people share the same ideas and values.


>"Bulldozer code" that gives the appearance of refactoring by breaking out chunks into subroutines, but that are impossible to reuse in another context (very high cohesion)

>3. Difficulty writing functions with low cohesion

Is there some definition of cohesion I don't know, or is the author confusing cohesion and coupling?


From Wikipedia: "In computer programming, cohesion refers to the degree to which the elements of a module belong together.[1]", where [1] is Yourdon & Constantine 1979.

It appears that the author uses cohesion and coupling as synonyms - and rightly so, by the above definition.


Cohesion and coupling certainly are not synonyms.

Cohesion is about how much the code within a single module deals with the same or closely related concerns, so that behaviour "belongs together". High cohesion is usually considered desirable.

Coupling is about how much multiple modules depend on each other. For example, do they communicate only via a clearly defined interface, or do they have implicit dependencies like sharing memory or resources? Loose (i.e., low) coupling is usually considered desirable.
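
A toy Java illustration of the coupling half (the names are made up): the Alarm below depends only on a narrow interface, not on Clock's internals, which is what loose coupling buys you:

    interface TimeSource { int hour(); }

    class Clock implements TimeSource {
        public int hour() { return java.time.LocalTime.now().getHour(); }
    }

    class Alarm {
        private final TimeSource time;
        Alarm(TimeSource time) { this.time = time; }  // dependency is injected
        boolean shouldRing() { return time.hour() == 7; }  // knows nothing of Clock
    }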


Point number 9: "using HN for programming advice or guidance" - relying on HN with its fixation on the latest shiny thing (are we still hyping Go and React today?) and its endless pursuit of academically useless tools (big shout to Haskell, and to any meta-language built upon yet another interpreted and JIT'd language).


Why would you say Haskell is academically useless?


Why would you respond to a troll?


They were willing to debate my statements; it seems you prefer to throw labels around in place of a single argument or rebuttal.


You're saying things that are not even wrong, from an account created one hour ago with a name clearly tied to the topic. I'd say obvious troll is obvious.


A troll you can argue with isn't a troll. It's just too easy to exclude people because of their non-conformism ...


> Using cut-n-paste code from someone else's program to deal with I/O and Monads

> Failure to implement a linked list, or write code that inserts/deletes nodes from linked list or tree without losing data

> Difficulty seeing through recursion

I would argue that using monads, implementing your own linked list or using recursion is a sign of bad programming... But I am just a Java cowboy, so thanks for the enlightenment!


> implementing your own linked list or using recursion

I believe I can salvage this piece of the comment, and make it into an intelligent thing to say.

Implementing your own linked lists? Not good unless your language is stone-knives-and-bear-skins land. You need to know what libraries are out there and available to you, and if CS undergrad stuff is unknown to the libraries in your language, pick a different one if you possibly can.

Recursion is an important skill, but implementing your own recursive functions is low-level. It's error-prone and it's boring in precisely the ways that cause errors.

Implementing your own map function is worse than implementing your own linked lists; it's more like implementing your own while loops out of labels and goto statements. It's an invitation to have bugs in your program's flow control, which is the worst possible place for them to be.

Sometimes you need to write your own recursive function, because none of the basic recursive patterns your libraries give you can fit. Sometimes you need to implement your own foundational data structures, too. Both of these should be done with skepticism, primarily of your own code, but also of the very idea that you'd actually need to do it in the first place.
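
To make that concrete, a small Java example of reaching for the library's map instead of hand-rolling the loop (streams as the stand-in, since this thread keeps coming back to Java):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    class MapDemo {
        public static void main(String[] args) {
            // The library's map expresses the intent directly; no hand-rolled
            // loop or recursion means no flow-control bugs of your own making.
            List<Integer> lengths = Arrays.asList("a", "bb", "ccc").stream()
                    .map(String::length)
                    .collect(Collectors.toList());
            System.out.println(lengths);  // [1, 2, 3]
        }
    }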


In Java you also need recursion, even if it is not as ubiquitous as in functional languages. Say you want to traverse a tree structure - a directory hierarchy, for example - then you need recursion.
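
Something like this sketch, for instance (using java.io.File and ignoring symlink cycles for brevity):

    import java.io.File;

    class Walk {
        static void walk(File dir) {
            File[] entries = dir.listFiles();  // null if dir isn't readable
            if (entries == null) return;       // which doubles as the base condition
            for (File f : entries) {
                System.out.println(f.getPath());
                if (f.isDirectory()) walk(f);  // recurse into subdirectories
            }
        }

        public static void main(String[] args) {
            walk(new File("."));
        }
    }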


> I would argue that using monads [...] is sign of bad programming

Monads are a fairly common mathematical abstraction. You might as well say "Using addition is a sign of bad programming." You use monads all the time. You probably just don't call them that.
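
Even in Java, Optional.flatMap is the Maybe monad's bind in all but name. A small sketch:

    import java.util.Optional;

    class MaybeDemo {
        static Optional<Integer> parse(String s) {
            try { return Optional.of(Integer.parseInt(s)); }
            catch (NumberFormatException e) { return Optional.empty(); }
        }

        public static void main(String[] args) {
            // flatMap sequences computations that may fail, short-circuiting
            // on the first empty value -- the Maybe monad's bind.
            Optional<Integer> sum = parse("2").flatMap(a ->
                                    parse("3").map(b -> a + b));
            System.out.println(sum);  // Optional[5]
        }
    }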


Have you ever used a filesystem?



