That's why the interviewer needs to be competent enough to sniff out BS. Falling back on algorithm questions instead is a form of laziness. That's what the "curated golden path" comment is all about - there's a known, standard way to get a job at one of these companies, and following it is not a very intellectually interesting exercise.
It doesn't sound like you fully disagree, because writing OSS isn't that curated path, but sadly none of the devs I know who focus heavily on algorithmic whiteboard question performance look at open source projects. And they could justify this with the same reasoning you give: people lie, people copy/paste code...
I've pushed my team quite a bit away from the whiteboarding focus they had when they hired me, and we're better able to hire senior candidates now than they were back then. The BS detection is a big part of this: ok, you were on a team that did [really cool sounding thing]. What part of it did you do? What specifically did you learn from it? Even in aggressively paced interview loops you probably have at least 45 minutes to get them to answer those questions. If they're ducking and dodging, answering back with wrong information, or saying things that make it clear they used the wrong tool for the job and didn't know how to find the right one, then there you go.
It's easy to present code, but it's also easy to copy or memorize code. Some of the most impressive candidates I've seen are the ones where the "let me ask you about your past experience" discussion expands to fill the whole time slot and we never actually look at any code, because we're too busy talking about how you took a service from a single database to a multi-master, geo-distributed, easy-failover, alerted and monitored system, or how your random side project to look for Hidden Markov Models in baseball players' batting results went (how'd you store the data? what libraries did you use? did you implement your own versions of algorithms? what made it hard to draw conclusions? etc.).
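(For flavor, since people sometimes ask what a side project like that even looks like: here's a purely hypothetical sketch of the batting-streaks idea, assuming Python and the hmmlearn library - I have no idea what the actual candidate used. Someone who really did the work can answer every one of those probing questions about a script like this without hesitating.)

    # Hypothetical sketch: fit a 2-state HMM ("hot"/"cold" streaks)
    # to a batter's hit/out sequence. Assumes hmmlearn >= 0.2.8.
    import numpy as np
    from hmmlearn import hmm

    # Toy data: 1 = hit, 0 = out, for ~500 at-bats.
    rng = np.random.default_rng(0)
    at_bats = rng.choice([0, 1], size=500, p=[0.72, 0.28]).reshape(-1, 1)

    model = hmm.CategoricalHMM(n_components=2, random_state=0)
    model.fit(at_bats)

    states = model.predict(at_bats)  # inferred hot/cold label per at-bat
    print(model.emissionprob_)       # P(hit) in each hidden state
    print(model.transmat_)           # how "sticky" the streaks are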
> ok, you were on a team that did [really cool sounding thing]. What part of it did you do? What specifically did you learn from it?
Exactly this.
The memorable example for me was someone who worked on a military helicopter training simulator; specifically, they worked on connecting the physical radios (used for internal communications between the crew) to the simulator. Sounds cool, and it involved network experience that was entirely applicable to what we were hiring for. Personally, I'm fascinated by interfacing physical human-interface hardware with software and have done a lot of it.
However, the candidate was unable to explain details. What type of I/O hardware was used? Did you have to poll, or did it push/stream changes? Did you run into anything weird with bad signals or missed state changes or restarts that posed a big challenge? No answers to any of this, because apparently they "just worked on the protocol"... but then they couldn't remember anything about the details of the protocol (HTTP or something custom? Text-based or binary? TCP or UDP?). It was frustrating, because I remember otherwise liking this person.
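(To illustrate the level of detail I'd expect from someone who actually did that work, here's a purely hypothetical sketch: a push-style custom binary protocol over UDP. The port and frame layout are invented for the example and have nothing to do with the candidate's actual system.)

    # Hypothetical: radios push state as UDP datagrams in a custom
    # binary format. Port and frame layout invented for illustration.
    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))  # assumed port

    while True:
        frame, _ = sock.recvfrom(64)
        # Invented layout: channel id (uint16), PTT flag (uint8),
        # signal level (float32), all network byte order.
        channel, ptt, level = struct.unpack("!HBf", frame[:7])
        print(f"ch={channel} ptt={bool(ptt)} level={level:.1f}")

If you wrote or debugged something like that, you remember whether it polled or pushed, and you definitely remember TCP vs UDP.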
They had similar responses about their most recent job (one they had left literally a few weeks prior).
I have a terrible memory, but even I can remember the biggest challenges of my career. If I start talking about them, especially when asked probing questions, I remember all the little details and could go on for hours.
I don't get it. Were they just coasting in the background, not really contributing anything meaningful, doing just enough to not be fired? Did the interview make them so anxious that they literally couldn't remember any details (it didn't seem that way)? Were they outright lying about what they did?
There's a trap that some small subset of seniors fall into where there are a lot of projects going on and their role becomes to play architect, provide minimal direction, and manage expectations. Nobody tells them they're a manager, yet everyone thinks they're responsible for the development work that gets completed. Nobody else wants to manage the developers (the good ones ask tough questions about the spec; they're "hard to manage"), but nobody respects this guy enough to treat him like a real manager.
(S)he spends so much time playing translator that there's no time to sweat the details. That falls to the team.
Exactly. Making a good hire is a management skill. Is it acceptable for managers to measure the productivity of their team by counting the # of lines of code the team produces? Or the # of bugs the team fixes? If not, why is whiteboarding with algorithm questions acceptable?