From what I've read from you in the past (as well as the way you're mentioned in the article), I think we probably agree.

If I had to reduce the debate to two clear camps: one group believes there's a bright-line distinction between "good" programmers and "bad" programmers, and that the questions-at-a-whiteboard model is an effective way of discriminating between the two. The other group doesn't believe in partitioning the world into two clear groups.

It isn't debatable that there are a lot of people who can't write code. I believe that trivial phone-screen coding problems are pretty effective at weeding those people out. Beyond that, it's much more about candidate/position fit, and that's a more subjective question. The problem is that programmers tend to be uncomfortable with subjectivity, preferring instead to posterize the shades of gray -- that's how we end up with the apocryphal Google engineer who can't code.

In your stated example (person hired; can't fathom basic realities of C pointers), I doubt that it reflects a complete inability to be functional in some programming role -- just not the one you were needing. Most "Rails programmers", for example, would probably be hopeless in that situation, but perfectly capable of cranking out websites with Rails. I've seen a fair number of real-world programmers whose (lack of) knowledge of algorithms and data structures makes me cringe -- and I certainly wouldn't hire them for the kind of work I tend to do -- but I have to accept that they're effective within their niche.




We agree more than we disagree. Most importantly: the big problem I see with our industry is our inability to detect aptitude in candidates who don't "show" well in resumes and interviews; in other words, the opposite of the problem of being "too selective". It sounds like we agree on that.

On the other hand: I think the counterintuitive thing about ineffective programmers is that many of them are perfectly capable of reasoning through a CS problem in Java or C++ or Python on a whiteboard. But they're --- for lack of a better term --- useless when it comes to actually getting real-world systems built.

My thoughts about why this happens aren't yet well-formed. I just want to contribute the perspective that the "bad programmer" problem isn't just about people who can't buzz a fizz.


> On the other hand: I think the counterintuitive thing about ineffective programmers is that many of them are perfectly capable of reasoning through a CS problem in Java or C++ or Python on a whiteboard. But they're --- for lack of a better term --- useless when it comes to actually getting real-world systems built.

Software engineering is a performative art. Sure, there is a heavy intellectual component, but you have to actually do software engineering to get good at it. You can't just study the theoretical aspects. It's like how you can't learn Haskell just by reading a book and some websites. And yet someone can go through school for CS and slide by without doing any real engineering or programming. I had a CS graduate TA (for a CS discrete math class), at a very well regarded school, who literally did not know how to compile a C program, full stop. He was very into the pure-math side of CS...but still.

As an analogy - you can imagine that someone who has studied music theory for years, has perfect pitch, knows how all the various branches of music relate to each other, which artists influenced whom, etc...could still be shit at playing musical instruments or creating new music.


That points to another problem: Computer Science is not Software Engineering. Yet a lot of the time we insist on hiring CS people for our Software Engineering jobs. It doesn't help that there's not really a clear definition of what good quality software engineering is.


It's an interesting problem. Before I started interviewing I expected to get a ton of really good candidates and have trouble being selective for lack of being able to find anything wrong. Boy was I off the mark. The number of people who don't pass the fizzbuzz test is quite high. I think if we tried to give the "sleeper" candidates a break we'd let in so many bad ones we'd quickly get overwhelmed.


We thought so too. But the (nearly) resume-blind process we settled on quickly converged on a nearly all-sleeper candidate pipeline; in several years, we hired, I think, just 2 people who had our field on their resume. We didn't just retain all those candidates: we were knocked on our asses by how well they performed.

Recruiting is two problems: outreach and qualification. We did novel things on both fronts. For this thread, I just want to point out: the changes we made to qualification were critical, instrumental, fundamental to our success. Most of our best hires could not have happened had we qualified candidates the way we did in 2009, and the way most firms do today.


I have a close friend who joined Matasano about 2 years ago. While a good chunk of the readership here is familiar with some of those changes, I think these 'lessons learned' should be shared as widely as possible. I'm no marketing expert, but if someone can find a way to make some of your ideas go viral, that would be great for the industry.


Can any of this be attributed to salary negotiation or "better" options being available to those already established in the field, such that a large commitment seemed unnecessary to them? For those who were established, where did they drop out of the pipeline and why?


FizzBuzz is easy iff you are aware of the % operator. I only know how to test for divisibility (cleanly) because I looked it up in the course of doing Project Euler. I could easily see there being developers who worked exclusively on non-mathy business logic and never had to test divisibility, which would explain floundering on FizzBuzz. (Then they should be able to reset counters every 3 and 5 iterations or something, but that's a tad more involved than the canonical solution and might be regarded by some interviewers as incorrect.)
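For what it's worth, the counter-reset version is only slightly more code than the canonical one; a quick sketch in Java (my own, purely illustrative):

    int three = 0, five = 0;
    for (int i = 1; i <= 100; i++) {
        three++; five++;
        String out = "";
        if (three == 3) { out += "Fizz"; three = 0; }  // reset every 3rd iteration
        if (five == 5)  { out += "Buzz"; five = 0; }   // reset every 5th iteration
        System.out.println(out.isEmpty() ? Integer.toString(i) : out);
    }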


Fizzbuzz is useful because people fail it even if you give them info about loop structures and the mod operator.

It's only sometimes about people not being aware of a language feature. It's usually about people not being able to program.


fizzbuzz can be modified to remove the need for the modulus operator: "write a function that loops through an array and prints 'fizz' if the element equals the parameter 'a', 'buzz' if it equals -a, and otherwise prints the value of the element". Or something like that. This tests writing a function declaration, passing values in as parameters, iterating over an array, writing an if statement, and printing. That seems pretty basic and fair to me.
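Something like this, say, in Java (the exact signature is my guess at the exercise as described):

    // prints "fizz" if the element equals a, "buzz" if it equals -a,
    // otherwise the element itself
    static void fizzBuzz(int[] values, int a) {
        for (int v : values) {
            if (v == a) {
                System.out.println("fizz");
            } else if (v == -a) {
                System.out.println("buzz");
            } else {
                System.out.println(v);
            }
        }
    }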


Are you a JS engineer? It's hard to imagine someone could write C, C++, Java, Python < 3, or probably C# without cleanly understanding the modulus operator, because

   System.out.println("" + 3/4);
(or the moral equivalent) prints 0 in all the above languages. Every developer gets bitten by integer division eventually.


Uhhh, the mod operator is not the solution to that problem though.

In reality you'd only use a mod operator if you were doing things like sorting into 4 columns or doing an operation on every 3rd thing. And it's not a concept that is introduced at school. Now that I think of it, I only picked up mod when I was learning rounding in my first language and the man page happened to mention modulus at the same time.
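To illustrate the two uses mentioned, a toy example of my own (names are made up):

    String[] items = {"a", "b", "c", "d", "e", "f", "g", "h"};
    for (int i = 0; i < items.length; i++) {
        int column = i % 4;                // which of 4 columns this item lands in
        boolean everyThird = i % 3 == 0;   // true for every 3rd item (0, 3, 6, ...)
        System.out.println(items[i] + " -> column " + column
                + (everyThird ? " *" : ""));
    }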


The mod operator isn't the solution to that problem, but that problem shoves integer division in your face. If a dev sees that and isn't curious enough to understand there is a division operator and a remainder operator, just like when you studied fractions in 3rd grade, I don't know that I want to work with that person.

And every dev should have hit the remainder operator at bare minimum when they had a long running loop and wanted to print a status every kth operation, eg

   for(int i=0; i < 100000000; i++){
     // some operation
     if(i % 100000 == 0)
       printf("** operating on count %d\n", i);
   }
or when processing a big file, printing every k lines; or when running a slow operation, printing every k seconds; or ...


No, because for most of those problems there's an easy alternative: declare another counter variable and just go:

    z++;
    if (z >= 100000) {
        System.out.println(x);
        z = 0;
    }
When I was taught maths there was no emphasis on remainders, and it certainly wasn't denoted with a % sign. I vaguely remember writing something like 12r3 in primary school, the r meaning 'remainder'.

A brief search on SO and lo and behold:

http://stackoverflow.com/questions/1504420/c-what-does-the-p...

Viewed 38,000 times. Every language probably has a similar question.


While I learned about the modulus operator at a young age, I grew up programming in environments where "nobody" used floats, because the CPUs in question didn't have FPUs, and floating point operations resulted in costly library calls.

While only "older" (I'm 39) developers are likely to have been in that situation, it took another decade after I moved onto hardware with FPUs before I worked on anything where we actually used floating point math.

Instead we'd be working with fixed point stored in integers. For financial systems, for example, floating point is a nightmare. Working with fixed point to however many decimal places our accounting department wanted (5-6, typically) for tax calculations and the like was preferred.

So there are a large number of areas where people can have worked successfully for many years without ever using floating point.
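As a toy sketch of the fixed-point approach described above (my own illustration, not the actual systems in question): amounts live in longs scaled by 10^5, i.e. 5 decimal places, so all the arithmetic stays integral.

    public class FixedPoint {
        static final long SCALE = 100_000L;  // 5 decimal places

        // multiply two fixed-point values; one division undoes the double scaling
        static long times(long a, long b) {
            return a * b / SCALE;
        }

        public static void main(String[] args) {
            long price = 1999 * SCALE;   // 1999.00000
            long taxRate = 8_250L;       // 0.08250, i.e. 8.25%
            long tax = times(price, taxRate);
            System.out.printf("tax = %d.%05d%n", tax / SCALE, tax % SCALE);
            // prints: tax = 164.91750
        }
    }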


I mean, I know about the mod operator because I did Project Euler problems in middle school, but if I hadn't, nothing I've done since would have made me learn it. The extent of the math I've had to do was tracking send buffers in C, which was just addition and less-than/equal-to.


A naive implementation of % is pretty trivial, so even if they weren't aware of the operator, I would at least expect them to write an equivalent function.
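For instance (my sketch, assuming a >= 0 and b > 0):

    // using the fact that integer division truncates
    static int remainder(int a, int b) {
        return a - (a / b) * b;
    }

    // or, even more naively, by repeated subtraction
    static int remainder2(int a, int b) {
        while (a >= b) a -= b;
        return a;
    }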


> But they're --- for lack of a better term --- useless when it comes to actually getting real-world systems built.

Part of the issue here is that each of us has a different understanding of what it takes to get a "real-world system" built.

That's somewhat obvious because each of us has different "real worlds", so even within the same language/ecosystem there is a huge disparity in what skills are required to be genuinely successful in different teams/organisations/applications.

To put it in concrete terms, when I worked in banking, the main impediment to building successful "real-world" systems was getting clear, unambiguous requirements that could be translated into something implementable. Some of the best developers in that organisation were excellent at their job because they could take the incomplete and inconsistent desires of a banker and turn them into a coherent and complete "world view" of the system they were required to build. We were primarily a Java shop, and they could produce an adequate application within the frameworks available to them, but I shudder to think what they would have said if you asked about "coupling and cohesion", or the Java memory model, or how dynamic proxies work, or even how they should choose between making a field a short, int, or long.

So, for my current (startup) organisation, I wouldn't hire those people - despite them being instrumental in producing some of the most satisfying and successful (ROI) systems I've been involved in. This team needs people who can do hard engineering, and if my former colleagues applied for a role here they'd end up looking like the proverbial "inept programmer" that we're debating.

This is a reflection on their skills, but it's also a sign that similarly named roles ("software developer") in different teams require and cultivate different skill sets.



