
Due to a series of coincidences I've worked at 3 companies in the hiring space in my career, and I've personally performed over 400 tech interviews. I've also spent 2+ years teaching programming. I feel like I can answer that question - though I'm sure plenty of my colleagues would disagree with my answer.

Essentially I agree with you - most of the reason is cargo culting Google.

Assessing software engineers is hard. A couple of decades ago, Google (which at the time was a tech darling, and the #1 place to work) had a saying: "A players hire A players. B players hire C players". Essentially they were terrified of hiring bad people, because they figured the company would inevitably go downhill if they did. Their hiring process was essentially an expression of this idea - it was based on the philosophy of "we're all A players, but are you as clever as us?". Interviewing at Google at the time involved sitting through about 6 back-to-back whiteboard interviews with programmers. Each person would spend ~20 minutes asking you their favorite puzzles and things, and seeing how you did. Nobody can say this because it would be illegal, but it was in many ways a programming-themed IQ test. Good questions were the ones which filtered candidates out. And it's easy to recommend against hiring someone if they couldn't reverse a binary tree on a whiteboard in 20 minutes. (I mean, that's easy for me! They must be a C player.)
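(For concreteness: "reversing" a binary tree here usually means mirroring it, i.e. swapping left and right children all the way down. A minimal, purely illustrative Python sketch - not anyone's official interview answer:)

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def reverse_tree(root: Optional[Node]) -> Optional[Node]:
        # Mirror the tree by swapping children at every node.
        if root is None:
            return None
        root.left, root.right = reverse_tree(root.right), reverse_tree(root.left)
        return root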

Other companies followed suit. I mean, hiring is hard. Why not just copy Google's approach? Microsoft did something similar. Facebook was full of ex-googlers, etc etc.

The problem is that being able to reverse binary trees doesn't correlate with how well you can manage a database, style a form, fix a memory leak or talk to your team. And the people who only have those useful skills are unhireable. Oops!

In my opinion, the right way to interview programmers is to make a list of skills you want your programmers to have (coding, debugging, CS knowledge, communication skills, architecture, ...) and then find ways to assess each one. For example, to assess debugging you can give your candidate some pre-prepared code with failing test cases and see how many bugs they can fix within 30 minutes or so. But that requires preparation and test calibration. Most companies struggle to convince their engineers to interview someone for 20 minutes - let alone spend a few days putting together a problem like that.
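To make that concrete, here's roughly what such a pre-prepared exercise could look like - a made-up Python function with a planted bug and a failing test. The function, the bug and the test are all hypothetical, just to illustrate the format:

    # Hypothetical debugging exercise: the test fails because of a planted bug.
    # The candidate's job is to find and fix it, not to write code from scratch.

    def running_average(values):
        """Return the running average after each element of `values`."""
        averages = []
        total = 0
        for i, v in enumerate(values):
            total += v
            averages.append(total / len(values))  # planted bug: should be (i + 1)
        return averages

    def test_running_average():
        assert running_average([2, 4, 6]) == [2.0, 3.0, 4.0]

The nice thing about this format is that the bugs can be made as realistic or as subtle as you like, and you're watching the candidate read and fix code they didn't write.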

Knowledge of data structures and algorithms is useful, and it is a positive signal about a candidate. But (depending on the role) I'd weight it below communication skills, raw coding skill and debugging. Those are all much more valuable. We need to start treating them as such.



>Essentially they were terrified of hiring bad people, because they figured the company would inevitably go downhill if they did.

I heard that Google search is now performing badly in many key areas.


It is indeed, by my own subjective observation.

But that doesn't mean it's the fault of the engineers - and most likely it isn't.

Rather, the product people (who are also not dummies) basically realized that "dumber" results were more profitable, for various reasons -- most likely to do with "engagement" and prioritizing what 90 percent of the users want versus the needs of the other 10 percent.


Yet one should be very skeptical about such claims. We don't know what parameters they are optimizing for, so how can we even assess its performance relative to those?


I hate medium and hard LC questions in interviews (it's very hard for me not to get panicky), so I'm surprised I'm going to argue against what you're saying:

If you want to test raw coding ability, asking someone to implement some very basic graph or tree traversals is a pretty good way to see if they know the basics of conditionals, loops, recursion, and maybe hashmaps.
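For example (a generic sketch, not any particular interview's question), a breadth-first traversal exercises exactly those basics - a loop, a conditional, a queue, and a hash-based visited set - in about a dozen lines:

    from collections import deque

    def bfs_order(graph, start):
        """Return nodes reachable from `start` in breadth-first order.
        `graph` is a dict mapping each node to a list of neighbours."""
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for neighbour in graph.get(node, []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
        return order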

If you want to see someone debug something, and you make them run their code, it will inevitably fail or not compile the first time... so they'll have to debug.


> If you want to see someone debug something, and you make them run their code

I hear what you’re saying, but that doesn’t assess what I want to assess. We all have experience debugging our own code, that we just wrote. But how well can you read someone else’s code? How well can you find and fix bugs in it? It’s a different skill! And it’s vital in a team setting. Or when you’re depending on complex 3rd party packages (which is most of the time).

I want coworkers who can read the code I write and debug it when it breaks. That’s a much more useful skill than what leetcode problems train.


Funny thing is that a company of A players can go downhill not because of its engineers, but because of its management. How many companies in tech have we seen go under because of bad engineering vs bad management?


> For example, to assess debugging you can give your candidate some pre-prepared code with failing test cases and see how many bugs they can fix within 30 minutes or so. But that requires preparation and test calibration.

The tricky thing is that no matter what test you contrive, it's more likely to say something about the developer's recent experience than about their competency in general.

For example, I'd say I have pretty good intuition for when to just read code, sprinkle printfs, or fire up valgrind/gdb/asan when debugging C. Which I guess is to be expected, given that I've been doing C almost exclusively for many, many years. I'd do pretty badly with Haskell; the last time I really used it was around 13 years ago. The next guy might be a bit lost with gcc's error messages, since the last time they used C in anger was 5+ years ago for a small project, but they'd do well if you hand them Python code that uses a well-known unit test framework or whatever. I guess that's fine if you're a run-of-the-mill CRUD company looking for a "senior <foo-language> developer", but not if you're after general competency.

You can try hard to make the debugging be more about the system than about the implementation but it's not easy to separate the two. You can make different tests for people with different backgrounds but that only makes calibration harder.

One trick I've seen a company do is deliberately pick a very obscure language that most people have never heard of. That can eliminate some variables but not all of them (I took the test and did well but I also spent a fair amount of time studying the language to figure out if it's suitable for a purely functional solution before handing in a very boring piece of imperative code). Ultimately it wasn't much more than a fizzbuzz.

And if there's puzzling involved, I'd say there's an element of luck involved. At least that's how I perceive the subconscious mind to work when you're thinking about a problem that isn't immediately obvious or known to you beforehand. Which path does your mind set you on? Are you stupid or incompetent if it happened to pick the wrong one today and you spent 10 minutes thinking about it too hard? Are you super smart if the first thing that came to mind just happened to be the right one and you didn't doubt yourself before blurting out an answer?

If you're lucky and know the problem beforehand, you can always fake brilliance: https://news.ycombinator.com/item?id=17106291

That is to say, test calibration is hard and there are so many variables involved. It follows that there's no obvious right way to conduct interviews. And I guess it follows that companies who need people (and aren't necessarily experts at interviewing) effectively outsource the problem by conducting the same type of interviews they've seen elsewhere. Maybe that's less cargo culting and more just doing whatever seems popular and good enough?


The best solution I’ve seen to this (if you have the time, and are interviewing for a variety of roles) is to have the same code (& same bugs) in a variety of languages. And let the candidate use their own computer and their own tools to work on the problem. If you’re hiring for a python role, get the candidate to debug python code!

I’ve done hundreds of interviews like this, and it’s fascinating watching what people do. Do they read the code first? Fire up a debugger? Add print statements and binary search by hand? I had one candidate once add more unit tests, to isolate a bug more explicitly.

After hundreds of interviews I still couldn’t tell you which approach is best. But if there’s one trend I noticed it’s that more senior people (& older people) seem to do better at this assessment. Which is fascinating. And that implies it’s not simply a test of what tools the person is familiar with most recently.

As for luck, I agree this is a problem. It’s mitigated somewhat by having a bunch of easy bugs to fix instead of one hard one. But even a debugging problem like this should be one of a set of assessments you perform. If you fail 5 small assessments in a row it’s probably not luck.



