
How do you test whether they're problem solvers that also know how to code, without leet-code interviews?


Giving people real-world problems and pairing with them on finding a solution.


I'm retired now, but I used a variant of this that worked well for me. In my experience, much of what we do as engineers is learn new tools and application domains and apply what we learn to solve problems. So I wanted to test how good a candidate is at learning and whether they appear to enjoy it (being good at something and enjoying it often correlate).

We would work together at the whiteboard, with me teaching them the basics of our application domain and our internal distributed computation framework. Then there was a series of five increasingly difficult iterations we'd work through. First, do a simple calculation. Now there's a need for a more difficult computation. And an even harder one. Then I'd explain that it needs to run over billions of log lines and is I/O bound, so let's make it faster. Finally, a problem that requires them to probe me deeply about the nature of the data and only needs a hand-wavy answer (calculate a median in a single pass with limited memory, not too bad once you extract from me that the samples are all integers between 0 and 500). To the candidate, this looked like iteration driven by evolving requirements, not five interview questions.
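
To make that last exercise concrete, here is a minimal sketch of the kind of answer that works, assuming (as in the interview) integer samples in [0, 500]. The function name and framing are mine for illustration, not the author's actual framework:

    def streaming_median(samples):
        """One-pass median for integer samples known to lie in [0, 500].

        Memory stays O(1): a 501-slot histogram, no matter how many
        samples (e.g. billions of log lines) stream past.
        """
        counts = [0] * 501          # one bucket per possible value
        n = 0
        for s in samples:           # the single pass over the data
            counts[s] += 1
            n += 1

        # Walk the histogram to the middle position(s).
        lower_target = (n - 1) // 2
        upper_target = n // 2
        lower = upper = None
        seen = 0
        for value, c in enumerate(counts):
            if c == 0:
                continue
            if lower is None and seen + c > lower_target:
                lower = value
            if seen + c > upper_target:
                upper = value
                break
            seen += c
        return (lower + upper) / 2  # averages the two middle values when n is even

    streaming_median([3, 500, 0, 7, 7])  # -> 7.0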

I mostly let them drive, but I'd jump in now and then to keep things on pace. And I would try to answer any question the same way I would as a teammate or mentor.

They don't need to get every problem right. The increasing difficulty is designed so that almost everyone gets something right and something wrong (fewer than 1% get all the way through the last one). That's an opportunity for me to give supportive feedback and see how they respond. The best responses to feedback yielded something like "oh cool, I can also use that over here to simplify the code". The worst was someone insisting that you can flip the order of addition and division without changing the answer (i.e., that order of operations doesn't matter), me saying "I think it does matter, try doing 1/2 + 1/4 in different orders", and them saying "no, I have a math degree, I know what I'm talking about."
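
For what it's worth, here is one plausible reading of that counterexample (my reconstruction, not necessarily the exact exchange): with the usual precedence the divisions happen before the addition, and pulling the addition ahead of the second division gives a different result.

    1/2 + 1/4      # divisions first, then add -> 0.75
    (1/2 + 1) / 4  # addition pulled ahead of the second division -> 0.375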

What they do need to do is be able to solve problems collaboratively, extract learning from an eager mentor, and extrapolate from those learnings to solve problems. The best candidates start finding trade-offs across the application domain / tech tooling boundary that I haven't even brought up.

One of the downsides is that it takes a while to calibrate as an interviewer, and that calibration is not entirely transferable across interviewers, as candidates respond differently to different interviewers. I did this interview thousands of times (my record was 14 times in one day, and frequently 20+ times a week; we were scaling an org from ~50 engineers to over 1,000). I never stopped learning about the signals I was seeing, but I was mostly calibrated after a few dozen candidates. Leet code calibration is much more transferable, but what use is that if it's not measuring what matters?

One thing I think leet code interviews get very wrong is forgetting that interviews involve two sides trying to evaluate each other. Wearing people down with leet code is not a good pitch for your work environment. Many candidates told me at the end of my interview that they were shocked at how fun and educational an interview could be, and many hires said this interview style was the reason they accepted our offer over more lucrative ones: they felt they would learn more and have more fun with us.


    > One thing I think leet code interviews get very wrong is forgetting that interviews involve two sides trying to evaluate each other. Wearing people down with leet code is not a good pitch for your work environment.
This is huge and underrated. Even when I have (_rarely_) passed the most bruising interview processes, my emotional state toward the company was deeply negative. Part of the OP's emotional state (exhausted / burned out) is the result of an adversarial interviewing style. With your style, you can gently push people to the edge and see how they perform. It's also more dynamic, which more closely matches real-world working conditions.



