My favorite question to ask in software engineering interviews is one that I believe to be unburnable.
> It's 2140 AD, New York is under water up to X feet high. Buildings have been retrofitted with <magical-ish material> to withstand the water. You are in charge of keeping your building dry. If water gets in and damages the foundation, a few thousand people die or become homeless.
> Design a system that ensures that doesn't happen.
How candidates approach this, how they think about redundancy, and how they deal with additional constraints or extra scenarios thrown at them tell me a lot about how they approach problems. The question is deliberately not (explicitly) about software, so that it gets people out of the coding mindset.
With more software-focused questions you always get candidates who start writing code and designing objects and classes and stuff. No, I want you to design a system. Stop writing code.
The reason this question is unburnable is that there's no right answer. You're being asked to show how you work through an abstract problem and design a solution. You aren't being asked for an answer.
So far, everyone who has done well on that question has turned out to be a great engineer, whether they were fresh out of college (and learned a lot fast) or already experienced.
This is a terrible question. Problem solving ability doesn't exist in a vacuum. Our experiences give us resources to draw from to combine in new ways that allow us to solve novel problems. Asking a software engineer to solve a problem in a dissimilar domain is badly missing the point of screening software engineers.
Sure, you may say that everyone who does well turns out to be a great engineer. I'm sure Google says the same thing about their algorithm trivia tests. Presumably the goal is to move away from these seemingly arbitrary and irrelevant tests. Just replacing one arbitrary and irrelevant test with another isn't improving the state of things.
I disagree. The goal of this question is to see how you utilize domain experts as a resource, add your software expertise, and design a comprehensive solution. It is specifically not a question about what you already know.
In the interview I play the domain expert.
This is exactly what your job will look like: collaborate with domain experts, use your software skills, solve real world problems.
I don’t need an engineer who can build a queue. I need an engineer who can use a queue.
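To make that distinction concrete, here is a minimal sketch (Python; the task names are hypothetical) of the "use a queue" side: the standard library already supplies the data structure, and the engineering is in wiring it into producer/consumer work.

```python
# Producer/consumer with the standard library's thread-safe queue;
# no need to build the queue itself.
import queue
import threading

tasks = queue.Queue()

def worker() -> None:
    while True:
        task = tasks.get()           # blocks until a task is available
        print(f"processing {task}")  # stand-in for real work
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

# Hypothetical task names, purely for illustration.
for name in ("resize-image", "send-email", "update-index"):
    tasks.put(name)

tasks.join()  # wait until every queued task has been handled
```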
Properly testing for a "solution engineer" (which is what you appear to be testing for) would mean posing a question that is a legitimate "software" problem, with "domain experts" in areas such as, say, "electronic payments". The candidate would be expected to demonstrate the ability to devise proposals for a 'functional solution to a business requirement' using the available domain experts.
The key phrase is yours: "real world problems". Your hypothetical misses that by a generous mile.
And I apologize for this, but we're discussing actual issues with software recruitment:
One major problem noted by senior engineers subject to these interviews is the competence level of the interviewer. I had this one guy, a "principal engineer", ask me about "Optimistic locks" as his initial query into my knowledge of concurrent systems. It took a lot of self-restraint to not blurt out "You mean optimistic locking?"
It is amazing to me that we somehow managed to hire very good software workers in the 90s without any of these shenanigans. One thing that does stand out from my memory of the 90s: we had senior colleagues (with literal white hair) in senior engineering positions. Go figure.
Seems reminiscent of the tales of early 2000s Google interviews, of the "estimate how many golf balls fit in an airplane" variety.
I'm not convinced this offers a useful selection criterion beyond boosting your unconscious bias about who "seems smart", but on the other hand I'm also not convinced it's any worse than the standard modern Leetcode interview.
I don't think they were looking for the right answer; I think the goal of that type of question is to see how the candidate approaches it. It's especially valuable with college hires.
Some candidates will simply freeze if they don't know the answer. Some will try to estimate the dimensions and come up with an estimate. I think Google wanted to try to anticipate what a person would do when stuck by asking these types of questions.
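For instance, here is a rough back-of-envelope sketch of the kind of estimate a stuck-but-resourceful candidate might produce; every number below is an assumption for illustration, not an authoritative figure:

```python
# Back-of-envelope: golf balls in an airliner cabin.
# Every number below is an assumed value for illustration only.
import math

cabin_length = 40.0    # assumed cabin length in metres
cabin_diameter = 3.7   # assumed cabin diameter in metres
cabin_volume = math.pi * (cabin_diameter / 2) ** 2 * cabin_length  # ~430 m^3

ball_diameter = 0.043  # a golf ball is about 43 mm across
ball_volume = (4 / 3) * math.pi * (ball_diameter / 2) ** 3

usable_fraction = 0.7      # knock off ~30% for seats, galleys, bins (assumed)
packing_efficiency = 0.64  # random sphere packing

estimate = cabin_volume * usable_fraction * packing_efficiency / ball_volume
print(f"roughly {estimate:,.0f} golf balls")  # on the order of a few million
```

What gets graded is the decomposition (volume, packing, wasted space), not whether the assumed dimensions happen to be right.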
Funnily enough, that old style of question is far closer to my day-to-day as an engineer than a leetcode algorithms question. Most of my job involves figuring out solutions to fuzzy problems based on unknown constraints, undiscovered requirements, and often unclear end-goals.
"How would you fill this airplane with golf balls?" is a fantastic question. If the candidate doesn't reply with "Why? What are you really trying to achieve?", they're gonna do poorly in modern software development.
Caveat: This does not apply to junior positions where you are expected to bang out code based on super fleshed out requirements and constraints.
Why? Isn't the better question "how would you solve <the actual thing they are being hired to solve>"?
I hire people to track objects via computer vision. I ask them "hey, here's a system I want, what approaches would you take?", and explain that this is a 3-year research project; of course they will give simplistic and wrong answers, and there is knowledge asymmetry here, but I'll inform them as they go as to what works and what doesn't. "Optical flow." Okay, why? What's the general algorithm? Okay, so it turns out it doesn't work in our situation because X, Y, Z. Any ideas on what you'd look at next? And so on.

Because that is exactly how my day-to-day conversations and work go: bounce ideas off of co-workers, latch onto something that seems promising, explore, maybe it works, maybe it fails. If it fails, what does that tell you about what to try next? Along the way you can have them code a tiny piece of something they mentioned if you want to see some code. But basically you are seeing if they understand the domain you are working in (which is not golf balls on airplanes), if they have some (not comprehensive) knowledge of the domain, and if they understand that book solutions don't necessarily deal with the messiness of the real world and can adapt approaches appropriately (i.e. a 20 sec/frame algorithm is not going to cut it when I need 180 fps).
It bothers me that this is still somewhat unfair due to the vast information asymmetry, but I try to deal with that. And my own biases can creep in. Are they mostly using 1980s style image processing techniques? Do they understand Bayes? Do they throw ML at problems that aren't tractable that way? It's unlikely that their past experience & preferences reflect exactly what we need. What is important - can they adapt, or even better, communicate why my choices are wrong and theirs are right (I don't need a robot to grind out code reflecting my ideas, I need them to figure things out and solve things in a fairly scientific manner).
So those are the questions I try to ask. I have no evidence that I get it right, but I haven't been disappointed with the hires.
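For readers unfamiliar with the "optical flow" answer mentioned above, a first-pass dense optical-flow baseline with OpenCV looks roughly like this. This is a sketch under assumptions (the input path is hypothetical), not the commenter's actual pipeline, and it is exactly the kind of textbook approach that can fail real constraints like a 180 fps budget:

```python
# First-pass baseline: dense optical flow (Farneback) via OpenCV.
# "input.mp4" is a hypothetical video path; parameters are textbook defaults.
import cv2

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Per-pixel motion vectors between consecutive frames;
    # a real tracker would consume `flow` here.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray

cap.release()
```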
Interesting that you mention 80's style image processing techniques. Please take care with these kinds of biases: often older techniques are superior to ones that are merely fashionable today. I've also been around long enough to see pendulums swing back and forth at least once in the AI realm, never mind tech in general.
Mind you, I’m not saying you’re wrong to discount people who fail to stay abreast in their field, just know that sometimes there is wisdom in ignoring popular trends, or revisiting old techniques that might benefit from a different landscape.
I think your approach is pretty optimal, but at larger companies the candidates might be interviewing for the company as a whole and not for a specific team.
I suppose it depends on how you grade the answers. I have bad spatial awareness in terms of how big things like planes are: I genuinely don't have an idea how long a commercial airliner is, or how big a ping pong ball is. I feel like I'd do OK if I were given reasonable approximate values for things like the size of the plane, the balls, the seats, etc. If I also have to supply those values myself, the end result is likely to be off.
The end result being off is fine if I am being graded based on my thought process, but a disaster if I am being graded at having an idea of the size of airplanes before walking into the interview.
Like I said, the point of this question isn’t to seek your domain expertise, the point is to see how you use other people’s domain expertise to create a [software] system.
The way you ask it, sure. I'm just not sure that's how everyone asks it, though maybe it is. I've never been asked anything quite like that before, but some comments I've seen around seem to imply some people want a close-to-accurate answer.
Me: "I'm a software developer, so I would let a material science engineer figure it out."
You: "You can't do that... All the material science engineers have drowned."
Me: "That's not very realistic..."
You: "Thank you for your time. Don't call us, we'll call you."
"Ok the material science engineer gave you the fancy material. It was applied. As you know, nothing is perfect and lasts forever. How do you ensure your building doesn't sink?"
I'd let the material science engineer figure that one out too. No way in hell would I risk my ignorance of building maintenance being the cause of thousands of deaths.
One thing I learned in getting my engineering degree is that it's unethical to practice engineering in an area you don't understand. No way would I ever try to apply my software engineering skills to a materials problem.
I feel it's only unethical if there are "real" stakes on the line, e.g. where people can get hurt. I don't mind a mechanical engineer writing code for a company that makes the zillionth vapid webapp, nor do I mind a software engineer designing a mechanical watch.
Use <magic material> to build a wall around the city? Have the people who maintained each individual building form a team that can keep an eye on sections in shifts? If we have enough magic material, build a double wall system for breaches?
> Use <magic material> to build a wall around the city?
That sounds expensive. Do we need to do that? Does it solve the problem better? Does it maybe create a worse solution? How would you find out?
> Have the people who maintained each individual building form a team that can keep an eye on sections in shifts?
How would you make this less time-intensive? Can you use automation? (A rough sketch of what that could look like follows these questions.)
> If we have enough magic material, build a double wall system for breaches?
Is there a cost-benefit analysis you can run here? How would you find out? Should we keep adding walls ad infinitum or does each additional wall have diminishing returns?
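As for what "automation" could mean in the shift-watching answer above, here is a minimal sketch of a redundant-sensor monitoring loop. Every sensor name, the threshold, and the quorum rule are invented purely for illustration:

```python
# Minimal sketch of automated leak monitoring with redundant sensors.
# Sensor IDs, the threshold, and the quorum rule are all invented here.
import random
import time

SENSORS = ["basement-north", "basement-south", "foundation-east"]
WATER_LEVEL_THRESHOLD_CM = 2.0
QUORUM = 2  # two sensors must agree before alarming, to tolerate one faulty unit

def read_sensor(sensor_id: str) -> float:
    """Stand-in for a real sensor read; returns water level in cm."""
    return random.uniform(0.0, 3.0)

def check_once() -> None:
    wet = [s for s in SENSORS if read_sensor(s) > WATER_LEVEL_THRESHOLD_CM]
    if len(wet) >= QUORUM:
        print(f"ALERT: water detected by {wet}; dispatch the repair crew")
    elif wet:
        # A single tripped sensor is as likely a sensor fault as a leak.
        print(f"warning: only {wet[0]} tripped; schedule an inspection")

while True:
    check_once()
    time.sleep(60)  # poll every minute; a real system would also watch sensor health
```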
It would probably be cheaper, simply because the perimeter would be shorter than the sum of the perimeters of most of the buildings in the city/district. That is what cities have done since at least the Neolithic, up until the moment cities became undefendable (because of cannons and airstrikes).
But that is probably not the point. In order to build a wall around the city, you need the power, or a consensus in society, to make that decision, and achieving that in a reasonable time might be unrealistic for someone in charge of one building.
I can't tell if you're asking these questions to elicit a longer discussion, or if you are doing so because you don't like the idea of a simple solution.
It’s unfortunate that this bizarre and arbitrary stab at analytical thinking assessment is still used to evaluate the skill of software developers.
Here’s another idea: have them write software. Not algorithm trivia, but actual software. Keep the scope small so there isn’t an onerous time commitment and have them explain their choices.
I would love to ask questions like that. I'm pretty sure the hiring committee would not like it though lol.
I do worry about asking questions that give candidates extremely large advantages if they have certain backgrounds. For example someone coming from mech, civ, or petroleum engineering will get a huge leg up on that question.
It is also worth noting that the structure for interviewing at a company with 30-50k+ engineers is different than the structure you need for interviewing at a company with <<1k engineers.
IMHO, this is not particularly better than an algorithm question.
Why not just summarize the traits of the technical skills you expect from the candidate, lay them out, and come up with appropriate questions for each interview?
The general software engineering interview does not work, but there can be more specific measures to improve the experience.
> Why not just summarize the traits of the technical skills you expect from the candidate, lay them out, and come up with appropriate questions for each interview?
That won't work; it's aimed at a different kind of skill.
The skill the GP is looking for is the ability to solve a problem you have never seen before, one that bears little resemblance to anything you've done before, by transferring your existing general problem-solving skills. It's a test of your ability to solve new things, which is a capability the company finds useful.
This is a very useful skill, and you can learn to do it better, but it's not a "technical" skill as we usually mean it. However it is one of the things which might be associated with "great engineer".
Listing technical skills and testing them will not tell you if the candidate has developed the above capability.
The GP isn't asking a very good question to identify that skill, is the point. Interviews in other technical industries don't ask these kinds of questions, because they're not really useful as a gauge of, well, anything useful.
> It's a test of your ability to solve new things, which is a capability the company finds useful.
Correct.
I work with early-ish stage startups. There's always a library that solves any given known problem. We don't have the scale or nuance to need to reinvent the wheel.
Where I really need engineers to shine is in solving the parts that don't have a library because we've stumbled onto something new. Or at least something new to the team. Or the way we've cobbled libraries together creates something new.
The important work is the work you haven't done before. We're engineers, not line cooks.
November should involve completely different work and a whole new set of problems than February. If you're still doing the same thing you did in February, something's gone wrong.