I interviewed for a Product Manager role at Google and my experience was awful. To put things in perspective, I was a Director of PM managing a team in my current role, working a lot with customers, presenting at speaking engagements, etc. as part of my day to day.
I get into the interview and the person on the other side seems to know very little about my background. He says he is a PM and starts with how much Google spends on storage for YouTube on an annual basis. Knowing the drill, I walk through assumptions like the average YouTube video size, the number of formats based on screen resolution and video quality, etc., and give him the logic. He pauses and says give me a dollar value. He doesn't want to understand the logic behind the calculations. Anyway, the next few questions are more of the same: code optimizations, etc. After 3 or so questions, we were done. No "do you have any questions for me?" No customer-related discussions. No discussion of what I have done in the past and how I've been successful.
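To make that concrete, the kind of back-of-the-envelope math I was walking through looks something like this (every number below is a rough assumption I'd state out loud, not an actual Google figure):

    # Back-of-the-envelope estimate of YouTube's annual storage spend.
    # All inputs are illustrative guesses meant to be stated and challenged, not real figures.
    hours_uploaded_per_day = 500_000        # assumed upload volume
    gb_per_hour_one_format = 3              # assumed size of one stored hour in one format
    format_multiplier = 5                   # assumed renditions per video (resolutions/codecs)
    replication_factor = 3                  # assumed redundancy across datacenters
    cost_per_gb_year = 0.02                 # assumed $/GB/year at warehouse scale

    new_gb_per_year = (hours_uploaded_per_day * 365 * gb_per_hour_one_format
                       * format_multiplier * replication_factor)
    annual_spend = new_gb_per_year * cost_per_gb_year
    print(f"~{new_gb_per_year / 1e9:.1f} EB of new data/year, ~${annual_spend / 1e6:.0f}M/year")

The dollar value falls out at the end; the assumptions are the whole point.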
I feel like these kinds of interviews are not judging what the person brings to the table; rather, it's "do you know what I'm going to ask you," and that's all that matters.
I always look for 2 things in any interview. Are you smart and motivated? Because nothing we do is rocket science; if you are smart and motivated, you will succeed. The other is: will I (and the rest of the team) get along with you? Teams need to work together, and people who lack tact and interpersonal skills end up being very difficult to work with.
I feel like this hits the nail on the head. They aren't looking for smart, motivated people who will work well on the team. They think they are, and that's what they say they are doing, but large companies have a hard time really doing that. There are so many other motivating factors and interests among the interviewers, the committees, the corporate structure, etc. If you are running your own company and building the product and you know you are going to work with the person you are interviewing, you can select, consciously and unconsciously, for all kinds of things that are important to you as an individual, and you are free to pick the person you think has the "smarts you want", which might be technical, personal, emotional, etc., and which might complement your own skills. But there are all sorts of strengths, and you can be biased enough to select for the ones that work with/for you. In a massive company, you get a watered-down set of some sort of skills, selected for by a line of competing, somewhat disinterested (which can be a bad thing) people trying to meet some "objective" one-size-fits-all criteria.
He asked you for "how much is google's spend" and you finished your estimations without giving him a dollar value? Did you forget his question?
> He doesn't want to understand the logic behind the calculations.
From your description that sounds like a false assumption to me. It sounds like despite your estimations you didn't give him an answer to his actual question, and so he had to prompt you.
>He asked you for "how much is google's spend" and you finished your estimations without giving him a dollar value? Did you forget his question?
He didn't say he finished without ever giving him an answer. He was simply explaining that he tried to give an explanation of how he was calculating the amount and the interviewer cut him off and asked for a number. It's absolutely stupid that the interviewer for such a question would only be interested in the number and not the candidate's thought process for arriving at it. For a question like this it's also absolutely reasonable for the candidate to assume that the interviewer wanted his thought process, because from an objective standpoint, that's the only scenario where this specific question wouldn't be a complete waste of time.
>From your description that sounds like a false assumption to me. It sounds like despite your estimations you didn't give him an answer to his actual question, and so he had to prompt you.
This sounds like an assumption on your part. You might be right, but you could also easily be wrong.
I'm fairly sure that he was speaking his calculations aloud in the process of reaching the dollar value when the interviewer stopped him and asked for an answer.
It's a great interview question in the sense of a Fermi problem (I was once asked by a startup "how many window washers are there in [CITY]?"). The premises you choose and how you evaluate them, however, are so much more important in a Fermi problem than a correct answer. I remember being about 5x off but it was only because of a drastic underestimation on the time it takes to clean a window.
Fermi problems are terrible questions when asked about things the questionee has no concept of. The point of a Fermi problem is to make an educated guess, and doing so requires knowledge of the problem space the guess is being made in. Pulling random constraints out of your ass for a domain you do not know does not demonstrate this ability.
If you're saying that in regards to the window washer question, I actually enjoyed it!
The point of a Fermi problem is to arrive at a reasonable estimation for a fairly unknown value by extrapolating and connecting from known values (by known, I mean there's a more narrow lower and upper bound).
I wouldn't say that a Fermi problem about how many window washers are in a city is a terrible question; it is more challenging than something in which you already have domain expertise, but that just makes you have to extrapolate further, which is the real point of the Fermi problem. In fact, pretty easy (and knowable) starting points are the population of the city, windows per individual, etc.
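For example, a worked chain for the window-washer question might look like this (every input is an admittedly rough guess, which is the point of the exercise):

    # Fermi estimate: full-time window washers in a city of ~1M people.
    # Every input is a deliberately rough guess; the reasoning chain is what matters.
    population = 1_000_000
    windows_per_person = 5            # homes, offices, and shops averaged together
    professionally_cleaned = 0.2      # fraction of windows done by professionals
    cleanings_per_year = 4
    minutes_per_window = 10           # easy to underestimate, as noted upthread
    working_minutes_per_year = 8 * 60 * 250

    demand_minutes = (population * windows_per_person * professionally_cleaned
                      * cleanings_per_year * minutes_per_window)
    print(round(demand_minutes / working_minutes_per_year))  # ~333 washers

Being off by 3-5x on any single input is expected; what gets evaluated is whether the inputs and the chain connecting them are sensible.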
Doesn't a Fermi problem without domain expertise display more critical thinking and reasoning style, while a Fermi problem with domain expertise is less of a Fermi problem, and more of a knowledge test?
> Doesn't a Fermi problem without domain expertise display more critical thinking and reasoning style, while a Fermi problem with domain expertise is less of a Fermi problem, and more of a knowledge test?
A Fermi problem is both a test of knowledge and a test of reasoning skills. The types of estimates Enrico Fermi was known for were only possible because he had the domain knowledge for his reasoning to leverage. If you remove the domain knowledge from the problem you remove a significant amount of the signal from trying to concoct the estimate. It is a much easier problem if you can just make up numbers rather than infer accurate guesses from the domain.
And how would an outsider have any idea of the cost? Do you just mean the plant costs? How much does Google pay per MW in each locale? How much does labour cost? What allowance for accrued pension rights?
Costs to Google are not magically different. You can make estimates without insider knowledge, but as in the window cleaner example, your estimates will be as bad as your assumptions.
You can also make estimates for compute, network, and storage costs based on the prices Google charged its Cloud customers for the same.
You don't. The exercise is in estimation. This is specifically not a case of the interviewer looking for you to get the "right" answer. The interviewer likely doesn't even know what the right answer is. They want to see if you can make back-of-the-envelope calculations and if you're capable of making sane (if inaccurate) assumptions.
Make a guess at total cost for an hour of compute time and how long it might take to transcode the average video. Guess at how many videos are uploaded on a typical day. Guess at how much the typical SRE costs Google and how many SREs YouTube employs. Do the same for software engineers, or explicitly exclude R&D. Guess at networking, storage, etc. Then roll all that together with some hours of video * (cost to transcode + cost to storage + cost to upload + cost to playback * average viewers) + sre cost +.... Bonus points if you can account for elasticity and peak load instead of just averages.
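As a rough sketch, that rollup is just a handful of multiplications; all of the constants below are invented placeholders, not real YouTube figures:

    # Toy annual-cost rollup for a YouTube-like service.
    # Every constant is an invented placeholder chosen only to show the structure.
    hours_uploaded_per_year = 500_000 * 365
    avg_view_hours_per_uploaded_hour = 200    # how often an hour of content gets watched

    transcode_cost_per_hour = 0.50            # compute to transcode one hour into all formats
    storage_cost_per_hour_year = 0.40         # storing all renditions of one hour for a year
    delivery_cost_per_view_hour = 0.01        # network egress per hour actually watched

    sre_headcount, cost_per_sre = 300, 300_000

    per_hour = (transcode_cost_per_hour + storage_cost_per_hour_year
                + delivery_cost_per_view_hour * avg_view_hours_per_uploaded_hour)
    total = hours_uploaded_per_year * per_hour + sre_headcount * cost_per_sre
    print(f"~${total / 1e9:.1f}B/year")       # a defensible-sounding, almost certainly wrong answer

Accounting for peak load rather than averages would scale the compute and delivery terms up by some factor, which is where the bonus points come in.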
The point is to show that you can think through the problem. If all you can say is "I don't know what your networking costs are", then you come across as useless.
He's not a new grad, he was a director of PM. He should have a feel for ballpark figures regarding infrastructure and personnel costs, which don't vary by that big a factor from company to company.
The question is perfectly reasonable (and it sounds like the interviewee was providing a reasonable answer). The issue is the way the interviewer ran the interview, not with the particular question itself.
I can't speak for rlpb, but it's how I read it, albeit not with much confidence. Why? Because "he pauses and says ..." doesn't fit with the interviewer interrupting and not waiting for the answer to be finished -- there must have been enough of a gap for the interviewer to pause and then ask that question.
Hey man, it looks like you are trolling all of my comments from days ago because I once noted that you had changed the entire text of one of your comments after it had already received replies.
This behavior of yours is weird and obsessive. Maybe reevaluate what it is you are trying to accomplish here and in life.
I think the thinking behind not asking about your background, not looking at your resume, etc., is that these things can introduce bias. By asking particular types of questions and training their interviewers in a particular way, they are trying to reduce bias and streamline the whole process.
In the truest sense of the process, an interview and hiring/not hiring decision IS discrimination. You are selecting for traits you (in theory) want, and (in theory) rejecting traits you don't want. Well, you are doing it if your interviewing process is not broken.
If you simply try to eliminate all bias, then you would never be able to make a decision.
All anecdotal evidence points to arriving at the correct, optimal solution as being key to passing the interview. It also seems like often times the interviewers are not even going to be working with the candidate so their opinion on a 'working relationship' is mostly irrelevant.
The objective of asking these leetcode style questions is to find candidates who are willing to put in the time to study. Success signals that this person is willing to commit to performing well at something that is reasonably challenging. The end result is that they are trying to hire worker bees. This makes sense as the bulk of work at any large company is largely mundane and relatively routine. I'm willing to bet that when a company wants to hire a two sigma candidate they don't go through all this nonsense, although at that point the candidate is already well known in the industry most likely.
It is true that you will almost never work with somebody who interviewed you.
In some ways, this is absolutely terrible for hiring. I worked in a part of Google where domain specific knowledge was key, and it was next to impossible to hire people because nobody on our team could interview them "officially". So we'd pre-screen people with the knowledge we needed, and then pass them off to others to officially interview. They would almost invariably fail, because their non domain-specific knowledge was not in the top 1% or whatever it takes to pass a generic interview. So we were left with the choice between hiring contractors, or training somebody internal who could leave at any time. We hired contractors.
EDIT: See the sibling comment regarding the homebrew author. This situation was exactly like that. Imagine wanting to hire somebody to make an internal package manager, and not being allowed to hire the author of one of the most popular package managers because his whiteboarding skills were lackluster, so he got a poor interview score from people that have no idea (or concern) what he does or what he's being hired to do.
> I worked in a part of Google where domain specific knowledge was key, and it was next to impossible to hire people because nobody on our team could interview them "officially".
When a company policy is so obviously broken in this way and there is no way to route around it, something is deeply wrong.
Here's how it should work. You explain the issue above to someone above you and they either have the authority to get round it or they pass it on to someone who does.
If a policy is failing in such a profound way and nobody can change it this implies a level of organisational dysfunction that must be affecting multiple aspects of the company.
One way to route around it is to acquire a company with the needed expertise. However, it needs to be a large-ish company. There is some cutoff beyond which engineers from acquired companies do not need to re-interview for their jobs. It is somewhere between 10 person startup and Motorola, but I have no idea where.
I don't think a way to route around it for an individual position or candidate exists.
You should see if there's way to circumvent HR or even have them work with you on hiring candidates. At my company, all engineers are VERY involved in the hiring process when it comes to new additions to the team.
My experience confirms this as well. I’ve interviewed at most FAANGs and many others, and made multiple attempts at some. Brute-force practice (e.g. leetcode problems) and memorization is the only thing that ultimately worked.
It would be nice if interviews really were a “collaborative” experience to see “how you think” where finding the optimal answer “doesn’t matter”. But it 100% does.
Years ago, I had the rare experience of being told why I didn't get an offer from a big Valley tech company. (Not FAANG, but perceived similarly.) It was for an internship, and I suspect they wanted to keep candidates interested for full time applications.
The relevant interview was a "find the palindrome" question for which the desired answer was Rabin-Karp. The rejection was explicitly "your solution to the whiteboard algorithm question worked but wasn't optimal, so study your algorithms harder and try again in the future".
At the time, I thought it was pretty stupid that they tried to judge skill by requiring candidates to either have one random thing memorized or reinvent a publication-worthy algorithm in 30 minutes. But given the advice I got, it seems like the core motivation was "we want people who are willing to devote a bunch of free time solely to impressing us". And for that, asking impractical questions and demanding optimal answers works perfectly.
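For anyone curious, the Rabin-Karp angle on a "find the palindrome" question is roughly the following: hash the string and its reverse so that any substring can be compared against its mirror image in constant time. This is a generic reconstruction of that style of question, not the company's actual expected solution:

    # Longest palindromic substring using Rabin-Karp style rolling hashes.
    # A generic sketch of the technique, not any company's official answer.
    MOD = (1 << 61) - 1     # large prime modulus; collisions possible but very unlikely
    BASE = 131

    def prefix_hashes(s):
        """h[k] = hash of s[:k]; pw[k] = BASE**k (mod MOD)."""
        h, pw = [0] * (len(s) + 1), [1] * (len(s) + 1)
        for i, ch in enumerate(s):
            h[i + 1] = (h[i] * BASE + ord(ch)) % MOD
            pw[i + 1] = (pw[i] * BASE) % MOD
        return h, pw

    def sub_hash(h, pw, i, j):
        """Hash of the slice [i:j] in O(1) using precomputed prefixes."""
        return (h[j] - h[i] * pw[j - i]) % MOD

    def longest_palindrome(s):
        n = len(s)
        fwd, pw = prefix_hashes(s)
        rev, _ = prefix_hashes(s[::-1])
        best_start, best_len = 0, 0
        for i in range(n):
            for j in range(i + 1, n + 1):
                # s[i:j] reversed corresponds to s[::-1][n-j:n-i]
                if j - i > best_len and sub_hash(fwd, pw, i, j) == sub_hash(rev, pw, n - j, n - i):
                    best_start, best_len = i, j - i
        return s[best_start:best_start + best_len]

    print(longest_palindrome("bananas"))  # anana

Binary-searching the palindrome length (odd and even separately) brings the hash comparisons down to O(n log n), which is presumably closer to whatever "optimal" meant here.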
As someone who conducts interviews frequently on behalf of Google, I'm in a position to categorically reject your claim that someone with {PRESTIGIOUS_COMPANY_X || INDUSTRY_REPUTATION} is given an easier path. Please note that I haven't interviewed anyone with 30+ years of experience (yet) and I only interview candidates for SWE roles (not Research Scientist roles), so view my experience through that lens, though I still think it's applicable to a majority of the crowd here on HN.
Everyone (regardless of background) has to sing for their supper, so to speak. Sure, having a good pedigree can make it easier to get scheduled for an interview, but that in no way means you're slated to have interviews with people who will handle you with kid gloves. On the contrary, the whole process is designed to zap out any bias and cover a large amount of breadth. Interviewers have questions they've calibrated themselves on, and if they get picked for the loop they usually just ask those questions. This is true from the phone screen all the way through the on-site.
You may feel that established engineers cross-pollinating from other competitive companies don't have to jump through as many hoops but that's only because they've been through this rodeo enough to know how to prepare for it well. Many are hobbyist competitive programmers and people who build stuff on the side so the interview questions don't blindside them completely.
I wasn't implying that the interview was easier for candidates with recognizable track records. Perhaps two sigma was not restrictive enough. I was merely suggesting that if a candidate has domain expertise and it is well known, they may not be subjected to the same leetcode hoop-jumping as the bulk.
Do you really want to remove bias for experienced candidates with a track record of substantial and nontrivial contributions?
> I was merely suggesting that if a candidate has domain expertise and it is well known, they may not be subjected to the same leetcode hoop-jumping as the bulk.
To put it bluntly, if you are such a candidate you are expected to know your algorithms cold and to field your domain-specific questions. So if you're a well-known compiler designer you would still need to know how to wield algorithms/data structures for parts of the interview loop and then get into detail about compilers in the domain-specific parts of the loop.
That being said, it totally depends on the position you're interviewing for. If you're interviewing for Director/VP role in an engineering ladder, then some of the interview will need to be repurposed to getting signals about leadership and impact. That doesn't mean that you don't code at all though, it just means that the focus/weightage will be moved towards other dimensions (in addition to evaluating your system design and coding skills).
> you are expected to know your algorithms cold and to field your domain-specific questions
...why?
You're holding candidates to high standards, fine, that's great. Compiler designers even need some deep algorithms and data structures knowledge; you can't ensure your hash tables are thread-safe unless you know exactly what they are, how they handle collisions, and so on. All that's totally fair to ask as part of their domain knowledge.
But the criticism of this sort of interviewing (and Google in particular) overwhelmingly describes interviewers judging domain experts on whatever algorithms puzzle they give every candidate from every domain. Why do we care whether a compiler design expert remembers how to implement Ford–Fulkerson on a whiteboard? Is there any reason to think that's predictive of talent? Might it even be anti-predictive because it favors generalists?
Google (and FAANG, and everyone else in that league) hires great people. I don't doubt that. But it was also hiring great people back when it posed random brainteasers and emphasized GPA, and Google has publicly said those things turned out to be totally uninformative. If you filter down to a list with more highly-qualified candidates than you can hire, I think there's a tendency to come up with random extra hurdles to avoid resorting to a coinflip that might work just as well.
Without getting too pedantic, it can be difficult to define what encompasses "expertise". You could have worked on some backend payment system for years and conclude that you're a payment "expert". But that means different things to different people.
Not all current "experts" are necessarily hired into corresponding roles needing the same expertise. To put it more concretely with an example, it's entirely possible for you to be a payment system expert but have a recruiter reach out to you for a general SWE role (you're welcome to decline). The slate of roles that do require said expertise are limited. If there's 10 qualified candidates, they could all technically pass the hiring bar. But if there's only 7 positions that require that exact expertise, then what happens to the other 3? Well, they're broadcast to "matching" teams that have available headcount. This "match" could be approximate, but it underscores why getting a signal beyond just the domain expertise is important.
We could of course debate whether something "advanced" should be asked, but that would require us to agree on what constitutes "advanced". In general, it's difficult to pinpoint some threshold and unanimously agree that that's the level of difficulty of generalist SWE questions that a domain expert should get asked. For what it's worth, there's additional intervention by committee(s) that look at this on a case-by-case basis to make sure that everything's on an even keel. Concretely, if you're an ML expert interviewing for a role that specifically demands ML expertise and you were asked some advanced question about data structures (e.g. something needing the Hungarian algorithm) and you're upset that you bombed it, then don't be; the committee(s) would weigh that interview's score less if they would expect something along the lines of Maps/Queues to be asked instead. These are all made-up examples, but I hope they convey why there's a need to extract signal beyond just core expertise. Google doesn't just artificially create hurdles for sadistic pleasure ¯\_(ツ)_/¯
Disclaimer: I haven't read the article, but I work at Google.
Most interviewers will ask the same question to multiple candidates. They will compare how the candidate is doing against other candidates on the same question. Most interviewers will provide hints.
I'm pretty sure that all candidates go through the same interview process.
There is really not much that is unique to Google's interviews. The only thing that is unique about the process is that your interview feedback is reviewed by a committee of peers.
> There is really not much that is unique to Google's interviews
Having just gone through 10 days of on-site interviews (including two at Google), there are some things that are peculiar to Google's process:
- No talk about software engineering,
- No talk about software design,
- No talk about project management, working on a team, culture of any kind, caring about customers etc etc...,
- No debugging,
- No using unfamiliar APIs, no reading documentation,
- No actually running any code.
In particular, Google was the only place I didn't actually test and run my code. It was the only place where I was given the option of doing all of my work on the whiteboard, too.
At least one time I had convinced my interviewer I had a working solution and then realised it didn't work and had to convince them of that...
> It was the only place where I was given the option of doing all of my work on the whiteboard, too.
I just interviewed at Facebook, and I was only given the option of doing all my work on a whiteboard. I hate that medium for writing code (or any dense text, really).
Hmm. I programmed on my own laptop (and ran the code) at Instacart, Apple, Airbnb, Dropbox, Stripe, Cruise, Flexport and Benchling. Late September/October this year.
At Google I wasn't allowed to bring my laptop, but they let me "program" in a text editor with syntax highlighting and automatic indentation. Most of the interviewers hadn't seen that before though.
I interviewed at Facebook, Snapchat, Apple, Pinterest, LinkedIn and startups. The coding interviews were exactly the same. This was less than 3 years ago.
The first two depend on the level you are interviewing for. If you're going for senior positions, you will have system design interviews.
The others, ehh, practically every interview I've conducted has had some debugging when people encounter issues in the code they wrote. Sure you're not spelunking logs or stack traces, but reasoning about what causes an issue on a smaller scale is still useful signal.
'I'm willing to bet that when a company wants to hire a two sigma candidate they don't go through all this nonsense'
Can testify to this, interviewed via referral, was still asked some algo-trivia, didn't do too well on that but still got a decent offer, mainly because referee vouched for me based on their experience working with me.
Either (a) they want you or (b) they want someone like you. If (a), they find any reason for hiring you, if (b) they find any reason for not hiring you. This explains most of the interview dynamics, especially at large companies.
I always see this trotted out as proof of how bad Google’s interview process is, and I always wonder: what makes people think creating homebrew was particularly difficult or impressive? Package managers are a dime a dozen.
It has also never been conclusively explained what "invert a binary tree" means (see the tweet a sibling comment linked to; that's the task the Homebrew guy claimed he wasn't hired for not being able to do).
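For what it's worth, the reading most people settle on is mirroring the tree, i.e. swapping every node's left and right children, which is only a few lines; whether that is what the interviewer actually meant is anyone's guess:

    # "Invert a binary tree" under the common mirror-image interpretation:
    # recursively swap every node's left and right children.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: "Optional[Node]" = None
        right: "Optional[Node]" = None

    def invert(root: Optional[Node]) -> Optional[Node]:
        if root is not None:
            root.left, root.right = invert(root.right), invert(root.left)
        return root

    mirrored = invert(Node(1, Node(2), Node(3)))
    print(mirrored.left.value, mirrored.right.value)  # 3 2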
I realize you asked about people in general and not Howell himself, but he said[1] almost exactly that about his product:
> Well, no I didn't [write something worthy of Google]. I wrote a simple package manager. Anyone could write one. And in fact mine is pretty bad. It doesn't do dependency management properly. It doesn't handle edge case behavior well. It isn't well tested. It's shit frankly.
But he goes on:
> On the other hand, my software was insanely successful. Why is that? Well the answer is not in the realm of computer science. I have always had a user-experience focus to my software. Homebrew cares about the user. When things go wrong with Homebrew it tries as hard as it can to tell you why, it searches GitHub for similar issues and points you to them. It cares about you.
I think the fact that he was able to recognize all the faults (and can therefore correct) in his package manager shows a great level of competency. The fact that he was able to productize his learning experience in the form of homebrew makes him all the more impressive.
I love how you belittle one of the most successful package manager systems out there..
We can apply the same logic to almost anything.. Because nothing is truly original and "difficult/impressive" are pretty broad metrics. Feel free to give us your definition.
Software engineering is about building functional systems for humans. Quite frankly, Google needs to hire more people who are able to deliver products that actually work. From the looks of it, hiring people who can solve "impressive" and "difficult" sorting-algo puzzles yields the sluggish Gmail mess we have at the moment.
Maybe Google needs to start hiring competent people instead of anyone who can recite the corporate gospel.
In my job I've dealt with dozens of Google engineers across a few teams... 95%+ of the time I don't get useful answers, and sometimes no answer at all. We have found many bugs in their systems, as well as broken API endpoints that did not work as advertised. We're dealing with one issue right now that no one at Google has been able to assist with in over 12 months. Of course, with all the turnover on some teams, the issues just keep getting passed to the next guy.
They are by far the worst of the FAANG-level engineering teams we deal with; competency is severely lacking.
Package management is a task filled with subtle pitfalls that is much, much harder than it looks on the surface.
It's a lot like syncing files - e.g. how Dropbox looked similarly deceptively easy to build but really wasn't.
IMHO the fact that the homebrew guy didn't get hired and that golang package management was a dumpster fire for years isn't coincidental. Both were part and parcel of a systemic bias that plagues Google's culture that they're blissfully unaware of.
Google doesn’t do package management well because they don’t do package management at all; they have a monorepo. If anything, the systematic bias is that Google built their way around an entire class of problems that others still face.
This has something to do with why the Go team waited so long to solve package management. Having little experience with it, they tried leaving it to the community.
But I don't think you can generalize to the whole company. For example, the Dart package manager is pretty nice.
I agree, to some extent. It's not terribly difficult to make a package manager. But it's a strong positive signal: it's hard for someone who is really bad at software to design a popular system like that. Also, it's unlikely for someone who is a really bad employee to take the initiative to even execute a task like that.
Regardless, I don't think qualifications like this should bypass the interview. The author almost sounded a little entitled when describing the rejection. That said, I don't think his achievements got sufficient weight while evaluating him as a candidate.
See my comment [1] regarding what these interviews are selecting for -- it might be a bad process, but it might come closer to the intended goals than other processes do.
Why shouldn't major accomplishments bypass the "random algo from a hat" that Software Engineering interviews have devolved into? If someone is a core contributor to a large, highly successful open source project, you have concrete proof of practical experience that you can discuss with the candidate. I'd put 100x more weight on that versus his ability to invert a binary tree on the spot.
More people in the world use Homebrew than use anything most Google engineers have single-handedly wrangled or produced -- excluding Google products that came out of Google Labs that we now use. So if you want someone who can take shitty lemons and create tasty lemonade out of them, he is definitely your guy.
Here's the thing - outside of some special projects for which people are hired without going through the standard interview process, Google does not want those who can take shitty lemons and convert them into tasty lemonade. Google wants specific kinds of butts that fit into a specific kind of seat. It wants the most homogeneous, high-output monkeys that will do what they are told, such as spitting out code to invert a binary tree.
You can also set up cloud storage quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. Yet the Homebrew guy went out and made a package manager that people actually use, and still couldn't get past Google's abysmal interviewing process.
It’s totally fair to slag on whiteboard interviews, I hate them too. But it’s worth pointing out that there aren’t any great options out there.
The traditional sit-and-have-a-chat interview is insanely biased towards people like the interviewer and doesn't at all guarantee technical competence. The take-home project feels like a big imposition to a lot of people, especially those who are currently employed, and raises concerns about cheating. The whiteboard-style interview done on a laptop instead has some advantages, but has problems of comfort with the environment.
I think a whiteboard or laptop interview with a cleaned up realistic problem, not a thinly disguised implement Dijkstra’s, is the least bad option around.
Strongly prefer the take-home project as long as it is confined to less than 4 hours. The gun-to-head aspect of the other choices significantly dominates other concerns in my experience.
I've seen people who otherwise did well at the coding parts of the interview not get an offer because they came off as a jerk, and no one wants to work with a jerk.
There's way more to hiring decisions than raw engineering talent. Teamwork matters a lot too.
Interesting. Two of the interviewers in my onsite loop at Google came across as jerks; one of them was incredibly disinterested and unhelpful - by far the worst interviewer I've ever had.
I haven't bothered with Google this time around - combine the risk of jerk interviewers with the glacial pace of their interview process and it's not really something I'm interested in doing. And that's not even considering any personal reservations regarding the Damore firing or the massive payout to Rubin.
Anecdotal evidence from an interviewer. Maybe 5% of candidates hit the optimal solution for my problem. It is absolutely 100% not necessary in order to do well in my interviews.
> I'm willing to bet that when a company wants to hire a two sigma candidate they don't go through all this nonsense
As a commenter mentioned in a different thread, two sigmas get acqui-hired. I remember reading about FB's acquisition of Instagram/WhatsApp; there was a requirement that the engineers have some understanding of CS concepts.
> Your interviewers try to understand what it feels like to work with you on a daily basis.
If that were true then why not simulate those situations rather than riddles, google-able CS trivia, or whatever the interview flavor of the month is?
I'd actually argue that for many companies this post is true (i.e. that getting it "right" is less important than the journey) but I still won't forgive companies that design the most hostile interview questions possible, and are then surprised when interviewees complain.
You can read the types of questions Google asks here:
I'd never interview at Google or any other company that operates this way. If the very first interaction I'm going to have is off-topic trivia, we're done, the company failed MY interview. The questions Google asks are disrespectful and unprofessional.
But disrespectful and unprofessional interview questions ("why are manhole covers round?") have become the new normal in this field.
5-6 years ago, I went for an interview at a real estate company. The introductions lasted all of 2 minutes. Then they took me to a computer and showed me a bug in the code base that I would be working on if I got hired. Then they said "please fix this". Took me about half an hour or so to hunt the bug down and fix it (it wasn't hard, but it wasn't a cosmetic bug either). Then they asked me how I found the bug. I explained, they told me I'm hired, and the paperwork would follow in a day or two. No other questions were asked; they didn't look at the resume at all.
The entire experience lasted less than an hour - from the time I walked into the reception till I walked out. Best job interview ever!
This codebase was probably 0.0000001% as complex compared to Google's (I can only imagine). Still, there is no reason more companies can't follow this style, at least for one of the several "rounds" of interviews, instead of whiteboard, trivia etc.
My last company was accused of this constantly. We would ask a very contrived, simplistic problem that was themed along with the domain of our business. Most good candidates would know this and solve it quickly. Many awful candidates would accuse us of using candidates for "free labor," and threaten to report us to whatever agency they think deals with this sort of thing.
I've seen this accusation thrown around dozens of times. I'm convinced that it's not really an actual issue, and just another way for bad candidates to transfer blame elsewhere.
That would be hilarious. Most of the value of a developer doesn’t straight out come from the code they wrote, but everything around that (design, maintenance, hardening). Getting free code isn’t actually that helpful without a lot of other stuff around it.
Not that you are wrong, but you are underestimating the dumb assholes who assume they would be cruising right on path only if someone could solve this tiny niggling problem for them.
I had to ask one moron on a phone interview whether they were looking for free consulting, because they kept harping on a very specific project setup (Java/XML etc.) that they were struggling with. It could of course have been solved in an hour or so by closely reading the product documentation instead of endless googling.
I think you misunderstand the definition of hilarious. This is not an attack, but an observation.
When you are the guy who has to meet payroll for your team, and some (insert appropriate NSFW description here) person/company deprives you of pay for the work they are taking advantage of to further their goals ... think how that makes you feel.
There are many adjectives. None of them are synonymous with hilarious.
I don't know if you are joking or serious. How much work do you think you can extract out of a normal developer in an hour or less, especially when he/she hasn't seen the codebase before? And how much effort do you think it would take to schedule the interview in the first place, starting with posting an ad, then screening resumes, scheduling time...? Not to mention someone has to vet/test the bug fix ...
This doesn't seem reasonable to me at all.
On the other hand, I can believe if this happens with those "take home assignments". Those tend to be larger (few hours to a few days) amount of work.
BTW, at least in this case, I did get the offer as they promised, within 48 hours. So I don't think they were doing anything shady. My guess is that they simply got tired of normal style interviewing and found this to be an efficient way to hire.
I gave 4 examples of this in an above comment. Yes, it happens. Yes, it's wrong. No, people and companies don't care.
At the end of the day, this is what sucked the optimism out of me at my previous company. We had lots of people trying to steal from us. Even after I closed it down, I had two people in particular ask me to help them design something, or give them detailed domain/design knowledge, for free.
Yes, it happens. Far too often. Makes people like me jaded.
I had this happen in a previous life, when I was working on designing/building/selling HPC and storage systems. Two of the worst offenders were universities, one was a prop-shop, one a semiconductor processing firm.
One uni called me up to tell me they liked my bid, but they wanted me to teach another company how to do what we could do, so they could buy from the other company.
The second uni issued an RFP, required that they get to keep all the docs, including detailed design specs. After interviewing all the companies, mine included, they selected 2 finalists, us and someone else. They probed us hard for details. Wound up buying our design from the other guys.
The prop shop did a similar thing, though we won the RFP. But then the customer went silent while working on the PO. They then awarded it to our competitor, as long as they used our design.
The semiconductor firm had a problem they claimed they could solve internally (they couldn't), and wanted to see what we could do. We met all of their (aggressive) performance, capacity objectives. But they kept pumping us for "free work" with the promise of a very large contract later. Against my business partner's recommendation, I called them on it, indicating that we'd be happy to work on the project with them. But not for free. So they "fired" us, as in asked us to stop working on it. Now, 4 years later, they are still struggling with the task.
These seem to be contract negotiations rather than tech interviews, and you learned important lessons about protecting yourself from unscrupulous actors in the process.
> they wanted me to teach another company how to do what we could do
Here one might triple? their rate for a much shorter hourly design contract and leave implementation to someone else. The other two examples might have creative solutions as well.
How is this useful? The interviewer still needs to evaluate the code and conduct the follow-through questionnaire. They could just spend that time fixing the bug instead.
Netflix hired me in a similar fashion. I would also characterize it as the "best interview ever".
I had a two-hour session with senior tech leads solely about solving a specific severe scalability issue in one of their database systems that they had not made much progress on. I showed them how I'd solve it using a technique unfamiliar to them. Next day, I get a "when do you want to start?" call -- they tried my proposed solution in a test environment and it had worked as advertised.
Two hours solving one real-world design problem in my area of expertise. Simple and on point, without any trace of culture fit, leet code, random whiteboarding puzzles, etc.
My current company does this. But instead of leaving the interviewee on their own, we do it as a pair programming session where they are the brain and the interviewer is the hands implementing it. Gives you a sense of how they think and a better look into how they work day to day.
"Why are manhole covers round?" and similar brainteaser questions have been banned at Google for ~5 years at least. They're only the new normal if you're a decade behind the curve.
I can confirm that no one asked me a brainteaser or gotcha question. They did, however, ask me irrelevant questions. My system design question involved a web based system. I am an embedded guy with no web experience who was interviewing for an embedded job. I would have been happy to design something in or near my skillset, but I have rarely worked with load balancers, backends, schemas, etc. I did my best and tried to have fun with it, but my interviewer was clearly not amused and seemed slightly annoyed that I didn't know anything about his domain. I was given preparation materials which did not mention anything about web architecture or design, only things relevant to my skills.
The opposite happened to me at triplebyte. There was a whole section on embedded systems and C. My interviewer let me know the section was coming up and I asked if I could just say "I know nothing about embedded systems or C" so we could just explore my web dev knowledge further, but no go. Very weird. I have nothing about either on my resume, and no desire to target any jobs that need me programming anything other than webapps in JavaScript for at least the first couple months...
Triplebyte is supposed to be resume-neutral, so it is reasonable they would try to ask you about it. Surprising they didn't take your word for "I know nothing about embedded systems or C" though... Maybe the interviewer has to go through the motions of asking you everything so that Triplebyte has a complete profile on everyone's performance on every question for their internal data analyses.
My link from Glassdoor has feedback from interviewees from as recently as days ago. You can see the types of interview questions Google is asking right now.
The sentence you're referring to was in the context of the FIELD which is why it said as much, and the specific question is a very well known stereotypical example:
> But disrespectful and unprofessional interview questions ("why are manhole covers round?") have become the new normal in this field.
If you browse my link you'll see that Google continues to ask off-topic questions, even if they aren't riddles or brainteasers.
Why don't they ask questions relating to the job? Why don't they do practice tasks that you'd be expected to do on the job? Those are the core issues.
"If you browse my link you'll see that Google continues to ask off-topic questions, even if they aren't riddles or brainteasers.
"
I have read literally thousands of interviews at Google (both on hiring committees for 10+ years and in the group that reviews hiring committee decisions), and I just don't encounter them much.
I can't even remember the last time I read this type of question.
I suspect your definition may be different than mine.
I do see that new grads/folks without a ton of experience get asked more questions to test fundamentals (not riddles), and folks otherwise get asked to just solve problems.
I have also occasionally seen a few unscored warmup questions that are more abstract/riddlish for nervous people, but even that's pretty uncommon.
It's usually small talk about resume instead.
Interviewers are also deliberately pushed to ask different questions if the candidate is not doing well (though it's slow to change interviewing behavior on this). Rather have signal on more questions than knowing that they really did badly on a single thing.
That means they may change style/abstractness of question depending on how interviewee is doing.
The problem is off topic questions, especially for more senior developers who have specialized extensively. For example, I haven’t used or seen used dynamic programming since college because my area doesn’t really use it. If I didn’t cram in interview prep for these questions, I’d be toast.
I wish programmers were treated more like designers with a portfolio review. I’ve implemented so many cool things, yet somehow I’m judged based on some esoteric programming problem. It’s like I’m an interchangeable cog who is being evaluated on how well I can be a cog....heck, HR and often even interviewers often don’t bother looking at your CV. Dammit, I do compilers, not chatbots!
The really great jobs I’ve had didn’t really require interviews at all, they knew who I was and knew what I’ve done and what I could do, and that was good enough.
> Why don't they ask questions relating to the job?
When I hear of this sort of behaviour in interviews, I have to assume that the culture of the dept is such that x% of your job will entail fielding issues that you consider outside your 'job'. If 20% of the time you'll be talking about stuff that you consider tangential, and you don't react well to that, you may self-select out.
But it’s not anymore. For all we know it was just a single early downvote, but your comment criticised the whole community. The guidelines ask us not to make comments about downvoting, as they make for boring, shitty discussions like this, and because a downvoted state on a comment is often temporary, whereas a discussion about it is permanent.
See comment from gedy elsewhere in the thread. Multiple people have pointed out these kinds of questions being asked well past 2006.
Since there is no incentive for a large number of engineers to cook up stories about their interview experience at Google, I am calling this smoke as having some fire behind it.
But what's the incentive for a large number of Google engineers to surreptitiously ask questions of a style that's explicitly banned?
I think there is incentive for failed interviewees to cook up stories - or at least to tell something very one-sided - because none of us like to believe we failed at something and feel better if we persuade ourselves it was stacked against us.
The most persistent critique I have heard about Google is that the experience can be very uneven in everything from recruitment and workload to even salary for the same job. Even if Google entirely stopped with brain teasers, people certainly seemed to have expected them on account of unpredictability up until a few years ago.
The incentive is just not having to come up with and calibrate new questions. It takes a lot of effort to do that, and if you've been asking a "banned" question for a while and are comfortable with it, it's easy to just keep using it until someone explicitly tells you not to (reverse feedback from hiring committee to interviewer wasn't a thing for a while). Most engineers don't really enjoy interviewing and want to put minimal effort into it.
And in fact there's no need for explicit animosity by the failed interviewee. Someone who misunderstands a question badly enough to call it a brainteaser when it isn't is also likely to have performed badly on it.
I've certainly met people who believed that anything other than (for example) an android ui question with open book stack exchange was a dumb brainteaser with no relevance to the actual job...
For my part, about every few months I hit something that would make a great interview question, and then have a wonderful afternoon coding it up... Ymmv.
I have a much higher faith in the capacity of the Internet to produce liars, trolls and delusionals than you apparently have.
This is an open internet forum for anyone to type anything they feel like in. Of course, as far as you know, I might be an Uzbek 12-year-old, so that argument only goes so far...
I can imagine that Google interviews for other roles than Software Engineers can have any kind of wacky questions. But mostly these stories sound so much like the stories that used to go around about Microsoft interviewing, only with the company name switched, that I have to think it has to do with their inherent virality somehow.
To me OP seems to assume a competent, unbiased, and well meaning interviewer: always giving hints, knows exactly what they want, will field and answer questions comprehensively. At Google's scale with hundreds of thousands of interviews on the books, thousands of interviewers who like all human beings have good days, sick days, angry days, and depressed days, this will not always happen. What happens when its the interviewer trying to have a "smartness" contest? There are too many testimonials of this happening at Google to just ignore it.
What happens is the hiring committee will disregard the interview report, and send feedback to the interviewer, saying "don't do that". (There are generally 4-6 interviewers on an interview panel, so if one interviewer does a bad job, there are other interviewers who will be providing signal to the hiring committee. A single bad interview report won't sink a candidate.)
Google's interview training is pretty specific about what is expected of an interviewer, and the interviewer has to write up fairly comprehensive reports about the questions that were asked, how the candidate answered them, what hints were given, etc. Most interviewers ask questions that they have asked multiple times before, so they will include things like, "The candidate (TC) took twice as long to code this very basic warmup question, which was designed to motivate a more in-depth distributed computing question; unfortunately, what a strong candidate could do in ten minutes, TC couldn't get working C code for in 45 minutes". (Current best practice is to try to avoid revealing the gender of the candidate in the interview notes, hence the use of TC instead of he or she. The goal is to try to avoid triggering any unconscious bias on the part of the members of the hiring committee.)
Now, there are some screening questions that are asked by recruiters as part of an initial phone screen when they are hiring for SRE candidates. Those questions are multiple choice and are really basic questions designed to screen out old-school operators who are quite good at mounting tapes (for example) but don't have any understanding of what TCP might be, and who think an SRE job is no different from a traditional system administrator position. Those are not asked by an engineer, but by a recruiter; the goal is to avoid wasting everybody's time.
Asking a question well takes skill. It is possible that an interviewer screwed up asking the question rather than that the candidate screwed up.
Most interview systems don't have a systematic, ongoing way for accounting for this especially given that interviewers are self reporting.
In my opinion, most interviewers are undertrained/underskilled at asking specific questions and at interviewing in general. They don't invest the resources to systematically improve but rather treat interviewing as a burden to be minimized.
There is still a lot of play in systems that allow for borderline smart-ass behavior. Do you think an interviewer will write up that they sneered at the candidate?
The interview process is a factory. The company is mostly concerned with reaching outputs. The candidate is just a happenstance casualty.
It’s almost as if interviewers need to be tested as well. An interviewing contest where a faux candidate rates interviewers and suggests area for improvement.
Of course, that would be ridiculously expensive, so at best some canned training is used instead. Still, it might make sense for higher-value teams.
After interviewers are trained, they will perform at least two "shadow" interviews where they tag along with an experienced interviewer, watch them conduct the interview, and then are asked to write up an interview report. After they finish writing the report, they can see what the experienced interviewer wrote up, so they can understand how the writeup should be done.
Afterwards, the new interviewer has to do a non-trivial number of interviews before they are considered "calibrated". When an interviewer is uncalibrated, their score won't be given much weight, and the hiring committee can see how their interview reports and scores compare against more experienced interviewers. This also gives an opportunity for the hiring committee to send interview feedback (e.g., you're asking a banned question; the coding question is too simple, so it's not providing enough of a useful signal; don't ask "trick questions" which again don't provide much useful signal whether or not the candidate can answer it correctly, etc.)
So there certainly are ways in which interviewers get suggestions for improvement. And it doesn't have to be _that_ expensive. It's just a matter of making sure you don't have more than one uncalibrated interviewer per panel.
It won't be possible to extract any useful information from such interviews. But I guess it's possible to convince employees to haze candidates this way. You should feel bad for doing this to people though.
>It won't be possible to extract any useful information from such interviews. But I guess it's possible to convince employees to haze candidates this way. You should feel bad for doing this to people though.
Empirically this isn't the case. It would be much more interesting if you took the time to elaborate on what about this process is cultish, why it won't provide useful signal. Without that, it just comes across as a mean-spirited complaint.
At Google, responses are calibrated against other interviews that interviewer has done. If they have some consistent skew then this can be adjusted for. If the interviewer sucks and the committee can tell then they can give that score less weight.
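The details of that calibration aren't public, but the basic idea of adjusting for a consistent skew can be as simple as reading each raw score relative to that interviewer's own history. A toy sketch with made-up numbers, not Google's actual model:

    # Toy calibration: express a raw interview score as a deviation from the
    # interviewer's own historical mean, in units of their standard deviation.
    from statistics import mean, stdev

    history = {
        "interviewer_a": [2.0, 2.5, 2.0, 3.0, 2.5],   # habitually scores low
        "interviewer_b": [3.5, 4.0, 3.5, 3.0, 4.0],   # habitually scores high
    }

    def calibrated(interviewer: str, raw: float) -> float:
        past = history[interviewer]
        return (raw - mean(past)) / stdev(past)

    # The same raw 3.5 carries much more weight coming from the harsher scorer.
    print(round(calibrated("interviewer_a", 3.5), 2))  # 2.63
    print(round(calibrated("interviewer_b", 3.5), 2))  # -0.24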
I'm not sure what you are asking. Each interviewer on the interview panel writes up a large amount of notes on their interview: the C++/Java/C/Python code written by the interviewee, what hints were given, what blind alleys the candidate might have wandered down, how the candidate tested the code, how long it took the candidate to find a bug (with or without hints), etc.
For a design question, the interviewer will write up a sketch of the design, what tradeoffs were identified by the interviewee; what hints, if any, were needed, etc.
Then the interviewer will rate candidate on various technical dimensions (coding efficiency, design, etc.) and non-technical dimensions (communication, leadership, etc.) For each of these ratings the interviewer has to justify the rating, by pointing at examples from the interview notes.
Finally, the interviewer will be asked to score the candidate along a dimension of "strong hire" to "strong no-hire" and again, the score must be justified with a paragraph. For people on the hiring committee, the justification for the scores are often far more important than the actual rating given by the interviewer.
The hire/no-hire decision is not up to the interviewers; the hiring committee is composed of a different panel of engineers who review the interview reports from the interview panel; and the members of the interview panel write their interview reports without getting to see or hear from the other members of the interview panel.
Every interviewer provides a lengthy write-up of each interview: several pages of notes, including the raw notes of what the candidate said, questions asked, plus an evaluation against a rubric for a number of criteria.
The system is not perfect (what system is?) but as someone who has conducted tens of interviews as a Googler and at other tech companies, I can say it's one of the most rigorous and fair systems I've seen so far. Interviewers are trained to try and get candidates to a 'win', trained to be aware of their unconscious biases, and the hiring committee process reduces the significance of any one vote. In general, the process is designed to select the best candidates and give all candidates a positive experience.
It's not perfect -- indeed, I'm sure I've given 'failing-grade' interviews as an interviewer here and there -- but it's one of the least-worst human systems that I've encountered, from both sides of the process.
Can you imagine how much effort you'd need to fabricate a convincing account of you asking reasonable questions, after you assaulted the candidate with ego-tripping trivia questions...
Besides being wrong and pathetic, why would anybody want to do that? The goal of maybe 90% of Google engineers is to spend as little time as the company allows on interviewing and writing feedback, and instead spend time actually working on their projects.
I agree 100%, there are too many testimonials about people asking about things like linux syscalls and the different bits in TCP headers for me to believe that all Google interviews are conducted without any tech pissing-contest shenanigans.
> If that were true then why not simulate those situations
Because when you try to simulate those situations in an interview that needs to be conducted in 45 - 60 minutes, the questions become the kind you hate.
I regularly interview candidates; and some find my questions weird. The point is that I'm trying to simulate everything in such a short time. Otherwise, each interview would take 16-32 hours, because we'd have to go through a massive onboarding process just to ask basic questions.
>If that were true then why not simulate those situations rather than riddles, google-able CS trivia, or whatever the interview flavor of the month is?
I hate this meme so much. I personally have seen situations in my own work several times in the last few months where knowledge of algorithms, even on the leetcode medium level, was immensely helpful in finishing the job. If you can do it quicker and more fluently, you're probably going to be more productive.
I remember that there are test patterns built into T1 CSU/DSUs, but not what they are or how to turn them on -- if I need to know, I'll look it up.
I remember that there are three QoS bits in the IPv4 header, but not where they are. Probably pretty early, because of hardware implementations.
I remember that lots of people look down on Perl 5's object system, but not why. I remember the existence and purpose of lots of Perl modules, but not the interfaces.
I edit JSON and YAML every few weeks, but I don't have a conscious recollection of the rules -- they're prompted by looking at what's already there.
I can guesstimate the Big-O of most chunks of code, but I'll be fooled when a function is hidden in a library, because I generally don't have those costs memorized (see the sketch at the end of this comment).
I can tell you the bandwidth of lots of hardware interfaces, and the relative speed and storage efficiency of a handful of RAID configurations: the ones that I set up, and the ones that I avoid.
I know one firewall configuration tool reasonably well, which still means I look up the esoteric bits, and I have used so many others that I expect to be able to do common things in all of them after a quick examination of the language.
I currently know a fair amount about GDPR and why it doesn't apply to my company (and how we can assist customers with their GDPR requirements), and lots about the Massachusetts and Virginia data privacy laws. Very little about PCI compliance, but my point is: if my company wanted to do card transactions, I could figure out what I would need to learn to write a good policy and get it implemented in a way that won't embarrass anyone.
In short: when the job is the same thing over and over again, you memorize the details. When it's always something new, you need a broad overview about what can be done and where to find the details.
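To make the hidden-complexity point about Big-O above concrete, here is a minimal, hypothetical C sketch (the function and the input string are made up): the loop reads as a single pass, but the library call hidden in the loop condition makes it quadratic.

    #include <stdio.h>
    #include <string.h>

    /* Looks like one pass over the string, but strlen() re-walks the whole
     * string on every iteration, so the loop is O(n^2), not O(n). */
    static int count_spaces(const char *s) {
        int count = 0;
        for (size_t i = 0; i < strlen(s); i++) {  /* hidden O(n) call */
            if (s[i] == ' ')
                count++;
        }
        return count;
    }

    int main(void) {
        printf("%d\n", count_spaces("a b c d"));  /* prints 3 */
        return 0;
    }

The fix is trivial (hoist the strlen() call out of the loop), but you only catch it if you know, or check, what the library call costs.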
> As compared to say "are you going to have kids - to a woman"
That's not only disrespectful and unprofessional but potentially also illegal. But it isn't a contest to see who can be the most disrespectful or the most disrespected, so I don't follow your point.
Anyone who interviews at a "prestigious" company and then complains about how difficult the interview was is being kind of hypocritical. Google, and other FAANGs/unicorns, make you solve hard algorithms questions because they believe - correctly or wrongly - that in order to succeed as a company they have to filter out the vast majority of candidates who have poor algorithmic skills. They also pay a lot of money because that's the only way to attract enough candidates who can pass their hiring bar. If they stopped asking hard interview questions and increased their candidate acceptance rate, they wouldn't need to pay people 300k/year to fill their open positions.
But the only reason people apply to Google and Facebook and Netflix in the first place is because they pay a lot of money. There are plenty of crappy CRUD shops that won't ask you to enumerate palindromic primes. As long as you can do Fizzbuzz they'll hire you and pay you 80k/year to glue libraries together. But people still try to interview at Google instead because they want to make 300k/year and not 80k/year. You can't have your cake and eat it.
We’re not complaining that they’re difficult, we’re complaining that they’re stupid and a poor measure. This isn’t jealousy: I was hired by Google twice, but comparing my interviews and outcomes with friends and later coworkers, it seems like I just got luckier twice, and they dropped some way more talented people on the floor. The process is a total crapshoot, but like many things at the company nobody senior enough is willing to take ownership and fix it.
I struggle to find evidence that this is any better anywhere else. At almost every other company I've worked at, the interview process was more about the referral itself (which leads to some pretty awful hires), or whether you can fake it til you make it.
Talent is not a single measure at Google. There are multiple facets to whether Google believes a candidate is solid. Strong technical talent alone is not an indicator of success; it's just one aspect that's taken into consideration by the hiring committees. So yeah, Google will say no to incredibly talented people because they fall short in other areas.
Crapshoot is table stakes practically everywhere you go. At least Google makes an attempt at making things objective and holistic.
It's not really a crapshoot: It's a process with relatively few false positives and a fair number of false negatives. You and your "more-talented" friends are likely all above bar by some perspective on it, and the process let a subset of those through. It's imperfect (the number of false negatives is obviously higher than desirable), but that doesn't mean it's random.
> They also pay a lot of money because that's the only way to attract enough candidates who can pass their hiring bar. If they stopped asking hard interview questions and increased their candidate acceptance rate they wouldn't need to pay people 300k/year to fill their open positions.
I have 16 years of total experience. Their initial offer to me indicated they thought I would take $231k/year for the privilege of working there. I got them up to $253k/year. Both numbers are less than I make now (though the latter is close).
As far as I am concerned, and based on my direct experience, Google pay is not the hot shit everyone claims it is. Maybe it would be if I played the competing-offer game, but I shouldn't have to do that.
Maybe you just got a low offer because you didn't perform at the level they were expecting (e.g. performing at senior instead of staff).
Yeah, from what I've seen, Google seems to coast by more on brand recognition, and isn't usually as competitive in pay as Facebook or Netflix or the big decacorns unless you negotiate hard with lots of competing offers.
To be fair the people who complain about this stuff probably don't think they pay this much either - I've seen a strong co-occurrence of salary denial in the same population!
FAANG companies don't pay anything near 300k USD annually in Europe; it doesn't happen. Probably 20% above the local market average. Their (certainly Google's) recruitment process in Europe is still just as demanding, though. These are just more companies in one's job application queue, and annoying ones at that.
Developer salaries are notoriously low in Europe, and I'm not even comparing against crazy SV or NYC salaries. I worked with devs in Europe working on the same platform and domain as me for half the salary. I haven't heard of many low-six-figure developer jobs in and around the EU.
I sort of agree with you regarding the salary. The thing is, there are now a whole bunch of companies in the valley paying total comp close to or above 300k. You don't need to go to a FANG to get that. What you get at a FANG is name recognition and a chance to work on possibly more impactful or well known software. You also get the chance to deal with overachievers who think the only path to success is to work yourself to death for a FANG.
> There are plenty of crappy CRUD shops that won't ask you to enumerate palindromic primes.
Nope, every shop has convinced themselves they are changing the world, and do ask these questions, often on codepad.io. Even greeting card companies, hah.
I do contract work and interview often. Not a single place in the last two years has failed to try these highly inefficient tactics on me.
I can't believe this flavor of answer has to be rewritten every single time FAANG salaries are discussed here. Reading the replies to this thread, there are a lot of bitter people. I myself did not feel like relearning all my undergrad CS problems, so I simply never applied to a FAANG engineering job, that simple.
They do it this way because they can: they have to weed out 99% of candidates, they are hiring software engineers, and it makes sense to weed out those who do not know core/advanced CS problems and solutions perfectly.
The problem is that the rest of the industry looks to these companies for best practices and then proceeds to cargo-cult them. If you think they can be avoided by applying to a greeting card or healthcare company instead, well, I've got news for you.
I'm going to provide no justification, but just say that that sounds very "drink the Kool-Aid."
If you have such a high hiring bar — and you still get a sexist memo, or vocalised hate-crimes on your platform, or salary suppression, or workers being disallowed from using the bathrooms, or 20,000 employees staging a walk-out because of bonuses awarded to perpetrators of sexual assault... Maybe your hiring bar is lower than it should be.
As far as I can tell, everything you just mentioned is totally orthogonal to coding/algorithms intelligence, so it’s not clear what point you’re making.
I don't really agree with this position. I have friends at Big4, a lot in Facebook. I wouldn't say they are the type to know what palindromic primes are. When I interviewed there I didn't get any questions that required decent algorithmic skills.
> filter out the vast majority of candidates who have poor algorithmic skills
No. Google, FB, etc. interview this way because candidates don't come in with skills on their particular stack, so this is the common denominator. (Source: I've been an engineer/manager at multiple FAANGs over the last decade.)
I have recently interviewed at most of these companies and eventually managed to get some good offers from a few of them, but let's be honest: I had to invest months in prepping for the tech screening, the whiteboard coding exercises, and the whole nonsense jazz. I am sick of reading these blog posts, because everyone knows that is not how the majority of these interviews are conducted.
I have more than 10 years of experience in the field and I earned my degree in computer science years ago, but I had to go back and brush up on tries, trees, etc. to convince the interviewers that I was worthy of working at company XYZ. The interview process is pretty much broken and very much biased towards fresh-out-of-college engineers. Unfortunately, if you really want to work at any of these companies you need to play their game and make them happy. Once you get the job, you will discover that most of your teammates are not as smart as they want you to believe they are, and often you will wonder how the hell they managed to get a job at company XYZ. Well, they simply invested months prepping for the interview, plus they come from some well-known university. Unfortunately, when it comes to real work, they have no idea how to get things done. What makes things even worse is that the tech screening is often done by junior engineers who have no idea or experience of how to conduct an interview, and they expect the answers by the book.
A positive note: I also noticed that some companies are now giving take-home exercises that are much closer to the day-to-day job. So maybe there is still hope.
As much as I agree that the process is bullshit, it is no different than studying for an exam, something we took for granted during school as a means to an end (a degree), except that in this case there is arguably a much higher payoff-to-effort ratio.
There is no mystery involved; one of the better-known books on the subject is "Cracking the Coding Interview", but really there is no mystery to crack.
There is a ton of literature, tools, and examples freely available on the Internet; all that is missing is the time and effort to go through them and learn them as if it were for an exam.
One of the main objections is "Why should I spend more time doing something I already do full-time at my job, where I'm perfectly qualified?", but to me that misses the point: if you think getting hired by another company will improve your life significantly (or even marginally), it is something you should definitely put effort into.
At the very least, this process (setting aside those who luck out) proves that the candidate is able to understand a non-trivial problem (getting hired) and has the ability, and puts in the effort, necessary to solve it (going through the bullshit exercises and questions), as it was in your case.
Google believes, as so many before them have, that the ideal candidate can be found through numbers: pure, unbiased, beautiful numbers. Because once you reach a certain size, your biggest threat is no longer your competitors, but rather your regulators, and that means that you must not expose yourself to regulatory (at the core, social) risk.
Bias is bad. Discrimination is bad. And if you're big AND bad, you get fined, and have obstacles put in your path.
So what does a rational BigCorp do? They make everything "fair". So everyone has the same opportunities, everyone is colorblind about cultural fairness-obsessions x, y, and z, and the hiring process becomes as effective as cardboard cake. Tick the boxes and you're in. Otherwise, there's the door.
Of course, the irony of all of this is that they end up discriminating against the very people they need: The different, the strange, the quirky, the innovative - everyone who doesn't perfectly fit the criteria of a safe hire where nobody can criticize the hiring decision (and impact your promotion prospects).
This is the social process by which BigCorp stagnation works, and it's how it has always worked.
Attack the idea, not the "type of mind" that posted it.
My point of disagreement would be that BigCos making "safe" hiring choices and having overbearing, one-size fits all HR policies is a phenomenon that predates modern political correctness. Having a policy of making interviews into arbitrary objective tests isn't just to prevent twitter mobs and hold off political rhetoric, it's a way of making hiring more predictable, combat nepotism, and select for people who aren't too individualistic to succeed in the corporate environment. The downside, of course, is you throw away and turn off a lot of good candidates who don't fit the mold.
I think BigCorp prefers to bring on the strange/quirky talent through acquisitions and acquihires, rather than through the front door. That way, strange though the individuals may seem, their track record is proven.
> Of course, the irony of all of this is that they end up discriminating against the very people they need: The different, the strange, the quirky, the innovative - everyone who doesn't perfectly fit the criteria of a safe hire where nobody can criticize the hiring decision (and impact your promotion prospects).
I work for Google as a software engineer and now also engineering manager.
There is no plausible mechanism by which a decision that anyone makes on whether to hire a candidate could affect that person's promotion prospects.
Interviewers don't make hiring decisions directly; they merely make individual recommendations to a committee that reviews these recommendations (along with other info like the candidate's resume). I have never heard of a hiring committee "reporting" someone for recommending they hire someone "weird" or "bad" -- it would be kind of absurd. If you don't like the recommendation, don't take it! Similarly, hiring committee members are not going to be "written up" for making "weird" or "bad" recommendations. There isn't even a group of people who could plausibly do this.
There is plenty to criticize about how Google hires. But I disagree with what I read as the thesis of your post that we have to choose between a biased process and an effective one. Indeed the problem you identify is what I would describe as bias against people who don't fit a particular mold, people who are unusual or "weird", and so really what I think you're saying is that in trying to fight e.g. gender bias, we have introduced new biases.
I also don't think that's true. The counternarrative I'd propose is: we have always been biased against what we consider to be "weird" people; that's human nature. What we've been trying to do is change people's perception of what is weird, so that e.g. female or black coders aren't. That's what unbiasing is about.
The goal of BigCorp, at any point at which it has been crowned the market leader (or sometimes monopolist), is to maintain the course. They become risk averse because you're less likely to get fired for making the same amount of money as last year than you are for making less. Yes, that might keep you from making MORE money than last year, but that's just how the human mind works.
There are several logical fallacies that wind up happening in these sorts of processes (confirmation bias, survivorship bias, etc.), but the people making the decisions are fine with making them as long as things are working as expected. Eventually entropy takes over in all systems, including human social systems, and things break down irreparably. There's no stopping this in BigCorp, because that is their nature. There's no stopping it at Small Co. either, because in that system there are so few people that a single hiring mistake affects such a large percentage of the total workforce.
TL;DR: all human systems involve humans, and that is the best point of entry for problems.
Yet another "trust the system" message from the authority figure who enables the system. Similar:
- Police officer: Just follow our instructions, be cooperative.
- Car salesman: Just be upfront with what you want. Tell us about yourself, and we'll earnestly try to help you.
The message is the same; the authority "just wants to help," but in reality the relationship is adversarial to a larger degree than it is cooperative. On Blind, you know the real way to getting through the Google interview is LeetCoding like hell and not admitting you've seen the questions before.
Some perspective from the interviewer's side may help here:
- A Google interviewer's (and I would assume any interviewer's) primary goal is to come out of the interview with enough confidence to give a positive or negative score. If they sit down to write feedback and have to give a neutral score, the interview wasn't productive. This means that the interviewer is just as eager to find evidence for a positive score as a negative one -- there isn't an incentive to "getcha" with cheap or tricky questions.
- Doing interviews at Google is volunteer work. You are not interacting with a professional interviewer, you're interacting with someone whose day job is being an engineer. They don't have an evil agenda; they are doing this because they want to help Google hire the best candidates, and by inference make sure their future coworkers are good people to work with.
- Interviewers overwhelmingly _want_ their candidates to succeed. It's a true joy when I have a candidate who glides through a question (or finds a solution that was even better than mine). When candidates struggle, it's not a pleasant experience for the interviewer either.
- In the end, the point of technical interviews is to avoid the terrible experience that is working with an incompetent or uncooperative teammate. Interviewers are trying to find people that (a) can work well with others and (b) can get the work done.
- The system is _highly_ prejudiced towards suppressing false positives. This is the right decision, but it comes at the cost of a high rate of false negatives (a rough back-of-the-envelope illustration follows this list). Were I or any of my colleagues to re-interview for our jobs, I would expect about a 60% hire rate. This is not even taking into account the constant ebb and flow of hiring demand. Sometimes there just isn't any headcount. And sometimes you just happen to get questions that you don't click with. This is also the reason that recruiters are so eager to bring you back to interview 6 months later.
- Recruiters and interviewers have very different incentives. Recruiters want to maximize the number of people they get hired; interviewers want to hire people they want to work with. This can lead to behavior that seems schizophrenic from the outside: the recruitment side of the pipeline constantly pestering people to interview, but once the candidate enters the interviewing pipeline the process is slow, deliberate, and careful.
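A rough back-of-the-envelope illustration of the false-negative point above, with purely made-up numbers: suppose a well-qualified candidate has a 90% chance of a clearly positive outcome in each of five independent interviews, and the committee effectively wants all five to be positive. Then the candidate clears the bar only 0.9 × 0.9 × 0.9 × 0.9 × 0.9 = 0.9^5 ≈ 0.59 of the time, i.e. roughly the 60% re-hire rate mentioned above. Tightening any single interview to suppress false positives pushes that number down further, which is exactly where the false negatives come from.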
>The system is _highly_ prejudiced towards suppressing false positives. This is the right decision,
This is textbook Google propaganda that has been repeated at least since I last worked there 5ish years ago. It's bullshit though because the ratio of competent to incompetent engineers was the same as at FB, MSFT and NFLX (with the latter tending to prune the fastest).
Just because you generate a system that spits out a lot of false negatives, it doesn't mean it has done anything to reduce false positives. This should be immediately obvious given that the relationship between the questions asked in G interviews and actual software engineering is nonexistent.
Don't repeat the trope that Google's hiring system is actually better at eliminating false positives. There is no evidence of it and if it truly was better, everyone would adopt it in a heartbeat and we wouldn't be working with bad engineers who spent a few months on leetcode to get into jobs way over their heads.
The reason Googlers never care to critically question the sorting hat is because it picked them.
"The reason Googlers never care to critically question the sorting hat is because it picked them."
I think this is a truism about the quality of most organizations: people who thrive in a specific organization coalesce into it and reinforce the qualities that are specific to them.
I presume the fact that Google uses non-professional interviewers makes the recruitment process more about cultural alignment than it strictly needs to be to gauge someone's capability to add value in a software engineering process.
All the FAANGs operate under a very similar hiring model, because they all get far more applicants than they can hire and can afford to have a high rate of false negatives. "Everybody" can't adopt it even if they wanted to, because your average business doesn't get a million applications a year.
That isn't really what the comment you're responding to was about. That comment was specifically about the fact that Google purports to target eliminating false positives, and the trade-off is accepting a high rate of false negatives. In fact there is not necessarily a relationship between the two, or at least not one that Google's process measures.
Oh, but they can and do. The difference between 100 and 1000 resumes is not material; both are too many to look at. What I've found is that those in the second bracket simply throw out those without a degree and then cargo-cult common practice.
> There is no evidence of it and if it truly was better
Year after year the Googlegeist survey finds that one of the things Googlers most enjoy about working at Google is their fellow employees
> if it truly was better, everyone would adopt it
Google has an abundance of money and an abundance of applicants who would like to work there. Companies with fewer applicants per position or lower salaries relative to the industry average may need to be more open to false positives if they want to be able to hire anyone at all. Smaller companies also have the advantage that they can usually fire people more easily than larger companies, which helps lower the cost of false positives for them.
>Year after year the Googlegeist survey finds that one of the things Googlers most enjoy about working at Google is their fellow employees
Hiring has very little to do with that. Perf review, feedback mechanisms, and work environment are orders of magnitude more critical to that. I've worked two startups with completely different hiring processes from Google and the other employees were amazing to work with there too. The key is feedback to correct issues and a quick PIP/fire process for folks not cutting it.
>This is textbook Google propaganda that has been repeated at least since I last worked there 5ish years ago. It's bullshit though because the ratio of competent to incompetent engineers was the same as at FB, MSFT and NFLX (with the latter tending to prune the fastest).
But those companies use hiring practices that approximate, to a high degree, what Google does. Facebook and Netflix certainly do, and Microsoft has a high enough rate of bad hires that you are required to reinterview to switch teams, so that high-performing teams can keep rejecting bad candidates who are already at Microsoft.
I don't work for Google, so I don't know how much this applies to you folks. In my experience, while what you are describing is ideally correct, it's often just that -- an ideal.
For example, you'll have people insisting (and consciously agreeing) that their objective is to come out of the interview with enough evidence to give a positive or negative score. In the back of their minds, though, they will often be projecting their own insecurities -- about their expertise, about their career, about their job, about their team. Halfway through the interview, things end up being about something else altogether, like interviewers trying to reassure themselves that they're better than who they're interviewing (it's especially hard not to fall into this if it's been years since you last had to implement a red-black tree and you're interviewing a fresh graduate who dreams this stuff in their sleep).
It's very hard to get past these things. I struggle with them every time I interview someone, and it's very hard to know when to chalk it up to "the system" and when to chalk it up to your own baggage. Pretending that it's only the former only perpetuates this stuff -- and empowers the ones who actively enjoy abusing candidates and making them feel like crap just for the heck of it. Which is very common everywhere -- including, from what I've heard among my peers, at Google.
So far, the most relevant compass I've found for these things is made out of two questions:
1. If I were a candidate, and I'd have gone through this interview, how would I feel about it?
2. If I were to go through this exact interview today, would I still get hired?
If the answer to #1 isn't too good, there are probably some individual-level things you can change, but if the answer to #2 is bad, the problems tend to be more systemic in nature.
> The system is _highly_ prejudiced towards suppressing false positives
I wonder if the algorithm-centric interview style at Google can really achieve that. From my experience, algorithm-centric questions plus whiteboard coding are biased towards academic people. (Maybe that's fine for Google.) However, the way to crack that kind of interview is really just to practice like hell on leetcode. Just take a minute and think: who is most motivated to do that? Good, experienced engineers have no trouble finding jobs in the Bay Area, so why would they waste their time on leetcode for skills that are mostly going to be useless in real work? New grads and engineers who have trouble finding jobs are the most likely to spend a huge amount of time on leetcode. I think the interview style at Google is in fact increasing false positives instead of suppressing them. It also has a high false-negative rate for sure.
There are tons of other ways of doing interviews in which the interviewer gets far more, and more relevant, signal from candidates while keeping the pressure low and not wasting their time, but Google is not doing them: asking practical questions, letting candidates write and test their code on their own computers, having a debug session, etc.
I'm a UX Engineer at Google. The tasks we ask you to complete in an interview are very practical - sketching out the same kinds of UIs that you might build in the real world.
The impractical part is that you'll probably be coding in a Google Doc rather than a text editor. It can be a bit disorienting.
I've also interviewed at Airbnb, where I was asked to code in Codepen for UI and in node for algorithms. I felt more comfortable in a more realistic coding environment, but one of their computers crashed mid-interview and the other's network access was broken. Coding in a doc and hand-waving when necessary is better than working in a more realistic environment if the hardware isn't reliable. (Realistic doesn't necessarily mean unstable, but if you're expecting candidates to write working code in a fixed period of time, you need to make sure your communal interview machines are well-maintained.)
Do you (and the company) try to find new ways to interview such that e.g. mid-career candidates don't need to practice in advance?
The current interview process at Google and probably at other companies as well seems to be frozen in time - one needs to be a fresh grad or prepare weeks/months in advance or be "into" competitive/sports programming, which in most cases has nothing to do with the daily job.
So no plans to have a fresh look on this?
Or maybe you keep these practices (and other companies follow) so that it is harder for engineers to change jobs easily?
I believe it is inappropriate to ask engineers to prepare in advance for interviews.
The last time I was contacted by their recruiter and sent links to coding websites, I replied "Great for someone just out of college".
Google, you are a boring company with an insane interview process. When I worked there 8 years ago I met many people who thought passing the interview made them better than others. I regret that I didn't tell them they should check their heads.
I was interviewed by Google twice, both times at the direct invitation of their HR team, as I never applied to Google. Naturally I bombed both times, as I am not the PhD kind of developer they are after.
Every single time, their recruiters told me how wonderful my CV was and how they definitely wanted to have me there, naturally for a selection of roles totally unrelated to the kind of positions I was applying for.
The third time I got a direct invitation from their HR team, I made it clear I wasn't interested if it was going to be again the same old way. Never got contacted again.
A recruiter's job is to get you there at any cost. I've been told before that cool teams A, B, and C want to talk to me, only to find that some boring team D is interviewing me.
> This means that the interviewer is just as eager to find evidence for a positive score as a negative one -- there isn't an incentive to "getcha" with cheap or tricky questions.
This is only true if the interviewer is uninterested in what happens after the hiring process. If they want to make sure they're winning a reputation doing great interviews for more good hires than bad hires then there's an incentive to be cautious, and that caution could well manifest as trying to catch out anyone who might be 'gaming' the interview process. Those false positives reflect badly on the interviewer; the false negatives don't because they might have been real negatives.
Such a feedback loop -- of identifying which interviewers give the "most accurate" scores -- doesn't exist. Nor is it clear how you would build such a system (how do you quantify a "bad hire" or "good hire" in such a way that isn't lost in the noise?). Interviewers are trusted to do the best job they can.
Remember, interviewing is volunteer work, not something that will advance your career. The results of the interviews and committee deliberation are confidential, so there's no way to gain a reputation for being a "great interviewer".
Ah, but you are woefully naive if you think some interviewers don't slip in who enjoy having people struggle with problems so they can stroke their own ego. There are also the ones that have seen the quality of engineers significantly decline over the last 6 years of massive expansion and just want to gatekeep.
One of the major flaws in Google's process is assuming that the engineers are incentivized to find good hires.
Yes and No. You want to build your reputation as a good interviewer, but that doesn't only mean you are tough and let only amazing candidates pass.
That also means that you are usually aligned with the interview committee, and if you are constantly a NO when 90% of the committee is a YES and the candidate ends up being hired, then you'll end up building a reputation for being too tough or just not picking up the right signals.
I've done 300+ interviews at Uber, where the process is somewhat similar to Google's, and OP's points are true. As an interviewer all you really want is to get clear signals, whether good or bad. And yes, an interview is much nicer when the candidate is doing great.
It doesn't work that way. At Google, the only people who can see your ratings are the people directly involved in that candidate's hiring process. As a hiring manager, you do learn which of your reports/fellow interviewers take a tough line when scoring, but that doesn't make them great interviewers and doesn't factor at all at performance review time.
Source: I work & hire at Google. Opinions are my own.
I don't think that interviews really generate reputation in a large company... First, multiple people interview each candidate, so credit/blame is always distributed. Second, and more important, ain't nobody got time for that.
I agree that everyone has good intentions, but at the end of the day these interviews can be gamed really easily.
Before interviewing at Google I spent ~3 weeks doing leetcode style problems on a whiteboard I bought just for this purpose. Did not make me any better as a SWE, but definitely helped me clear my interviews. Without the practice I would have failed my interviews.
Having said that, I don't think I have any better alternatives; any interview process is ultimately going to be game-able in some manner.
I used to be confused about interviews at big companies, especially when the interviewers aren't from the hiring team. I was also confused about how leetcode questions are used, and sometimes felt that the justification for hiring or not hiring wasn't based on evidence. Then one day I changed my view. I now see interviews as a way to evaluate how much candidates want the job, and their effort in trying to achieve something. If someone can invest time in leetcode, they can definitely learn and do well at any task. We are humans, and we can improve. So the interview does its job. Nothing is perfect, of course.
The issue with Leetcode and HackerRank as I see it is that you're re-proving to various companies, each time and in each interview, that you know how to code. However, the only alternative to this seems to be an SAT for programmers, which isn't better.
To be honest, coding is not difficult. Also, many jobs do not require 'that' much low-level knowledge anyway. Especially nowadays, for most positions, you don't need to know that re-ordering lines of code can affect the cache; you use a hash whenever possible and do not need to implement your own data structures. If you need something in your work that you currently don't know, googling and reading will definitely teach you. Programming is not a special power only a few people can have. Actually, if you are consistently learning, you will probably do well at any task. If people are not willing to put some time into getting the job they want, maybe they don't want the job enough.
The engineers doing the interviews certainly have no financial incentives either way... Recruiters make contact, do a preliminary "do you have a pulse" conversation, and then shepherd the process, but they don't have input into the decisions along the way. Their incentive is thus to find great candidates who they think will make it through the process.
I think that depends on whether you're talking about internal or external recruiters. At least everywhere I've worked, our in-house recruiters don't get paid a bonus when they hire someone; it's just their job. If you're working with a recruiter who doesn't work for the company hiring you, that person may have a more direct financial stake in your employment.
FWIW, I went through the interview process and I actually really enjoyed it.
My attitude going in was that I was evaluating what it would be like to work there, and I treated the interviewer as a coworker solving a technical problem, where I was taking the lead.
The questions I got were pretty much exactly the sort of thing you see on LeetCode, but I hadn't seen any of the questions before. I know from studying for other technical exams that once you reach a certain number of practice problems, pattern recognition takes you a long way.
I had a similar experience at Facebook. Everybody was super nice, they all wanted you to do well, and they all responded well to being enthusiastic and genuinely interested in the problems you're solving.
> Everybody was super nice, they all wanted you to do well
I had the same experience. Afterwards I thought I had done pretty well, because I eventually got to a solution for all the problems.
Later I discussed not getting the offer with a friend who works at Google. He said they try to guide the interviewee to a solution before the time limit. Which is reasonable: you want the interviewee to feel more confident and not get tilted by messing up just one question. In hindsight, however, the interview is a lot less enjoyable. You can't show any weakness solving a problem, lest they offer you a "helpful hint" and record a black mark against you. The interviewer is there to smile and act like your colleague solving a problem, all the while making a list of reasons why you're a bad problem solver.
I'm not that bitter though. The interviewers who asked me questions actually related to the position I was interviewing for (embedded software) did feel fun and collaborative. I accept that if I want to interview well at companies that employ these types of interview processes, I will need to do every problem in CLRS (or at least more than zero).
I haven't been on the interviewing side, so I can't really comment on some of this.
But the impression I got as the interviewee was that every question ramped up in difficulty. The slope of the ramp-up depended on the question. So maybe in your case, you didn't get as far on the difficulty ladder as other people that were interviewed.
I was pretty shameless in asking for hints when I needed them, but it was typically of the form "I see this tradeoff here, and my gut says to do X, is that reasonable?" I'm sure some of that counted against me, but I was blunt about what I needed to make a decision and solve the problem, and I think overall that reflected well on my ability to problem solve on a team.
I would definitely not recommend doing every problem in CLRS. I would recommend doing just about every problem in Cracking the Code Interview (because they have nice answers), and then getting into a rhythm of doing LeetCode problems every day until the interview.
"I see this tradeoff here, and my gut says to do X, is that reasonable?"
YMMV with interviewer, but personally, this is the type of thing that gets positive feedback from me - someone who understands and explains the tradeoffs involved.
Also, fair warning: LC has its uses, but it is not the same as an interview.
Yes, ramping up in difficulty is certainly a thing.
In both my experience taking a Google interview and conducting a lot of interviews at my own company, I agree that a good question is open ended. For example, "write memcpy" is trivial to get something working, but it can lead down many paths when discussing performance and computer architecture.
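To give a sense of what "trivial to get something working" looks like here, a minimal, hypothetical byte-wise sketch in C (the name my_memcpy is made up; the real memcpy adds alignment handling, word-sized copies, and heavy optimization, which is where the follow-up discussion goes):

    #include <stdio.h>
    #include <stddef.h>

    /* Naive memcpy: copies n bytes one at a time. No wide loads/stores,
     * no alignment tricks, and (like the real memcpy) no overlap handling. */
    void *my_memcpy(void *dst, const void *src, size_t n) {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }

    int main(void) {
        char buf[6] = {0};
        my_memcpy(buf, "hello", 5);
        printf("%s\n", buf);  /* prints "hello" */
        return 0;
    }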
What gave me trouble, besides being rusty with algorithms, were some trade offs I would never make in the domain I'm familiar with. In my day job my microcontroller has 192KB of RAM, so limiting the amount of data collected to fit is important. I never get to the point where I have to worry about my simple algorithm not scaling to GB of data. Another odd idea was doing lots of speculative computation to reduce the latency of a system.
All of these algorithm things can be studied and after failing a couple of interviews you can start to understand what the interviewer is expecting and how to clarify assumptions.
I do question what sort of company you build if you screen for people who can write code on whiteboards. There is no giant mono-whiteboard where all the software engineers check in their code, so the interview process is not related to what people actually do for the job. On the other hand I have interviewed hundreds of people and I don’t feel like my questions do much better with the same one hour slot given to me by HR. The whole situation is inefficient for everyone.
Here’s the thing: patterns you will spot doing Leetcode problems are almost useless for anything else. The only purpose for working enough problems to see those patterns is to have more successful interviews.
I personally think we'd be better off if there was more focus on these puzzles and less focus on 'cultural fit' - but that's just me.
If nothing else, interviewing is an important part of your working life. To get a competitive job, you are probably going to have to practice something. I personally can get a lot more excited about competing on puzzle solving abilities than competing on social skills or social protocols, even if those puzzles might not be used very often once I'm at work.
(I'm not saying algorithms are completely unrelated to the work we do- I'm saying that even if they were, i'd still prefer to be tested on puzzles than to be tested on social skills)
I mean, the subtext here is that we're all getting interviewed for jobs we don't know how to do. All companies have their own infrastructure and their own way of doing things; you can't hire people with experience on that company's codebase unless you want to limit yourself to ex-employees. So all companies are interviewing to try to figure out who could learn how to do the job in question, and that's... pretty difficult to do.
But one interesting side effect of the interview process is that everybody at a place like Google or Facebook or Amazon has a non-trivial understanding of efficiency.
My experience at facebook was being told I could use any language, choosing the language my preferred project was implemented in... and then having the interviewer repeatedly and incorrectly argue with me about syntax trivia.
Not sure about Google, but at IBM, when I was conducting tech interviews I was looking more for team members to help me complete the project than trying to project an authority figure.
The relationship was never adversarial, and if you knew how desperate we were to get good skills, you'd know there wasn't much option to take the high ground.
The poster is correct that open-ended questions that reveal the work ethic of team members are much more important than solving algorithms and hard problems. Intelligence is overrated.
My (pre?) interview with Google was little more than trivia questions. Example: the numerical value of SIGTERM. I could have talked about signals and how they were used for 20 minutes, but a nontechnical phone interviewer just wanted trivia answers.
The non-technical recruiter needs these trivia questions because they need questions with single answers that allows them to separate people with no technical knowledge and people with technical knowledge. The easy way to get past this barrier is to get a referral, then you start at the technical interview.
When I was at Google, I worked in a tiny office, and I was friends with the recruiter who worked that office. A couple of times she asked me to vet the answers to the non-technical questions, because if the candidate went off-script in any way, she simply couldn't tell if it's a wrong answer or a more in-depth answer than the script anticipated. This wasn't part of my job, and if our office wasn't so tiny we wouldn't have had any contact, and she wouldn't have had any engineers to ask.
I want to think that’s a terrible question - checking whether they have the signal table memorized is a hilariously bad way of trying to assess someone’s fitness as an engineer.
On the other hand, there’s some signal (sorry) on the nature of the work you’re doing in whether SIGTERM (15) is imprinted in your brain from near-daily appearance on your screen, or not.
I don't think so, unless you are referring to some kind of atypical terminal work. I just tried killing a console program and I see "Terminated: 15". That's not a line I even recognise - it's not common or important enough to bring to my attention.
I would say a vital command-line skill is having a good filter to ignore what's irrelevant.
This is a really absurd question to ask. Besides, kill -s SIGTERM or even htop does the job. But as other commenters have pointed out, the questions really depend on the person, as there is no fixed set of questions.
I sympathize with the general idea; however, this specific question isn't very representative of it.
Signals 9 and 15 are much better known by their respective values than by their symbolic names. They are everywhere in program output and log files, and if those things are your daily work then you recognize this whether you want to or not.
It's like if you have touched computer code in any language then you probably know that 65 is A. Not because you have useless trivia memorized but just because it comes up every time you look at string values for some reason. In one sense it is useless trivia, but on the other hand a programmer who never stumbled on ascii codes must have lived in an unusually insular bubble.
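For the curious, a quick way to check the values in question (assuming a Linux or macOS box; the C standard does not fix the signal numbers, but SIGTERM is 15 and SIGKILL is 9 on both of those platforms):

    #include <signal.h>
    #include <stdio.h>

    int main(void) {
        /* On typical Linux/macOS systems this prints 15, 9, and 65. */
        printf("SIGTERM = %d\n", SIGTERM);
        printf("SIGKILL = %d\n", SIGKILL);
        printf("'A'     = %d\n", 'A');
        return 0;
    }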
> Signals 9 and 15 are much better known by their respective values than by their symbolic names. They are everywhere in program output and log files
I have never written a program that outputs a signal number to stderr or a log file. And grepping the logs on a server with nearly 800 days of uptime, I can find only five references to signal 15, from months ago.
> It's like if you have touched computer code in any language then you probably know that 65 is A. [..] a programmer who never stumbled on ascii codes must have lived in an unusually insular bubble.
20 years of programming and doing things like making custom fonts. I couldn't remember the ascii code of 'A'.
Recruiters ask horrible questions. If the recruiter has to ask you a technical question, it will be a bad question, because the recruiter doesn't have the expertise to understand the answer.
So you're left with bad questions which have single answers, mostly involving rote memorization.
But the recruiters are told by the SRE team to ask questions like this to candidates before they get a phone interview. I don't see any value in that, but then I don't work as a Google SRE, so I don't know what is important to them. It is also possible that they are short on SRE interviewers, so they need to ask stupid questions to get the volume down to reasonable levels.
Another thing that isn't transparent to outsiders is how frequently candidates are shopped around if it becomes obvious they aren't a great fit for the role they were recruited against. I can't tell you how many times I've been asked if I'm interested in candidates that started out as SRE, CE, PM, or other roles. As mentioned, recruiters are paid on hire rate, and it's in their interest to ensure that strong Google candidates are identified and presented to as many potential hiring managers as practical.
Additionally, it's common for interviewers to interview for multiple different groups (as an additional tech interviewer, a xfn interviewer, a Googleyness interviewer, etc), so exposure to job requirements can be reasonably broad at the interviewer level.
As I imagine many suspect, calibrating interviewers is hard, and always has been, for a company hiring roughly 10,000 people every year.
It was indeed an SRE question. Followed by a series of algorithmic complexity questions which I happened to get correct based on barely-informed guessing, having never taken an algorithms class.
Your examples aren’t particularly on point. You should (nearly) always do what police officers tell you because even if they’re wrong, the only thing you can do on the scene is make it worse for yourself, and your car salesman example is simply suggesting that you shouldn’t negotiate.
The fact is that Google's interview process isn't designed to work for you or me; it's designed to work for them. The key factors that shape it are that they get tons of applicants, and they have a strong preference for eliminating false positives (which they can afford due to the previous point).
I’m not sure why somebody (seemingly earnestly) trying to help potential applicants understand the process should be the target of anti-authority criticism.
I read it as "I'm trying to make sure I don't hire someone who isn't willing to work with me, and the system allows me to do that." I can't speak for anyone but myself, obviously, but I think I interview at Uber at a similar rate to your average Google engineer (2-3 times a week), and our interviewing process is pretty similar to what's laid out here.
In cases where the interviewer is hiring for their own team, I feel like it's reasonable to assume that the interviewer is trying to hire people they want to work with. Acting in their own self interests, as it were.
In some cases that may mean "I want to make sure you are someone I'm able to collaborate with." In some cases that may be "LeetCoding like hell". Regardless, if that's what the interviewer is looking for, the candidate gets a really important insight into the team they'd be working for; it's not necessarily indicative of some larger corporate conspiracy.
Google engineers don't interview people they work with (except rare cases). Interview notes go through a committee, so your job as an interviewer is only to describe how things went with the candidate (as explained in this twitter thread).
Yeah, I call bullshit as well. Try getting to the onsite if you don't solve the phone screen question exactly as expected.
I have a friend who solved/memorized ~600 leetcode questions and got offers from Google, Facebook, etc. He was average: not dumb, but not well above average. When I apply to both, I will definitely be leetcoding as many questions as possible as well, to maximize my chances.
If that’s all it took maybe those interview funnels aren’t actually that great.
IME, my worst interview process ever was at Google. All I remember from it was some question about burning a piece of string. The coding part was relatively easy.
The best interview process I ever did was at Yahoo in 2008. Everyone was very friendly but the questions were tough. I found the “expert” level of the people interviewing me higher than at Google at the time. It was grueling though - 9am to 7pm. Got an offer to “join any team I wanted”. Turned it down for a startup.
You have two identical ropes. If you light a rope on fire, it will finish burning in exactly one hour. The burn rate is inconsistent - there is no relationship between how much of the rope has burned, and how much time has passed, until the rope has fully burned away.
How can you use the two ropes to measure 15 minutes?
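For anyone who doesn't want to be left hanging, one standard solution, sketched as a timeline:

- t = 0: light rope A at both ends and rope B at one end.
- t = 30 min: rope A's two flames meet and it is fully burned (burning from both ends halves the total time, no matter how unevenly the rope burns); at that instant, light rope B's other end.
- t = 45 min: rope B, which had 30 minutes of burning left, is now burning from both ends and finishes in 15 more minutes.

The interval between rope A finishing and rope B finishing is the required 15 minutes.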
Given that it's printed in Cracking the Coding Interview, Google has almost certainly banned this problem.
> I have a friend who solved/memorized ~600 leetcode questions and got offers from Google, Facebook, etc.
Well then, to go through and solve 600 leetcode questions actually takes a lot of dedication and time investment, which alone to me shows that your friend has a lot going for him. TBH, as someone who has hired a lot of engineers, if I found someone who was well rehearsed, and it was clear this person understood a lot of the underlying concepts in a problem as I dug deeper, I'd be thrilled to hire that person!
The list contains good tips for avoiding getting nervous about what is just normal practice, wasting time, or running into a dead end. If you have concerns about the process, the company culture, or anything else, use the time for questions at the end of the "main" interview. Interviewers will have better insight into the actual life of an engineer at G than recruiters. They don't have any incentive to misrepresent anything.
Exactly. The bottom line is that every interviewer is different and comes with their own quirks and biases. Writing another me-too post just because you squeaked through the 5hr gauntlet isn't going to help anyone. I've met some interviewers (Google and elsewhere) who were incredibly arrogant, some who just wanted to see the solution and copy it down for proof, etc. It's not black and white. Something has to change here.
I conducted many interviews while I worked at Google, and I wouldn't trust anyone who describes the interview situation as one particular, well-defined thing. Individual interviewers have their own styles, approaches, and goals. Even if people start from the same place because they go through the same interview training, they inevitably drift in their goals and in their interpretation of what the guidelines mean over the years. What your interview experience is going to be like depends on the particular people interviewing you.
By all means write a blog post about what you yourself are like as an interviewer at Google. But if you want to say something broader, really ask yourself: what's your evidence that you understand how other people at Google, on other teams and in other offices, do it?
This is exactly correct. I conducted dozens of interviews for Google myself, and while most went okay, some did not. Those were failings on my part, and I did my best to learn from those mistakes. That the interview process isn't more streamlined is a failing on the company's part, however. Good interviewing techniques aren't formally taught there, and are only learned via shadowing and sitting in on other interviews. This isn't just the case at Google either; most large companies are generally bad at training employees how to conduct interviews intended to draw out sufficient hiring information on candidates. And yeah, interview quality there varied wildly from office to office, and especially across the remote sites.
I've interviewed three times in the last five or so years there, and I don't get the feeling the depth is as claimed in the article.
Perhaps, it's also the interviewers not implementing it as intended.
If these points were indeed important, I'd expect feedback addressing them. All feedback I received was about doing well or not well on certain questions. No meta whatsoever.
I didn't make it, for good reasons, but I feel there was a big element of chance.
I had one onsite interviewer in the first round who was difficult to deal with.
He presented a problem I've never seen in my life, nothing close. He interrupted and gave hints in directions that didn't make sense to me, and didn't give me 5 minutes to think without him talking.
The second attempt was fair.
In the third, the first phone screen's interviewer was constantly typing on his keyboard while I talked, and it was so loud. No real conversation happening. He was just staring at his screen, clacking away. I addressed it, but he kept typing loudly while I was talking.
So annoying.
On the other hand, the recruiters were always very good.
At another company recruiting was a mess, and only knowing someone inside helped dealing with that, but the interviews were all great.
FYI, the reason one of your interviewers was typing is that they need to submit notes about the interview, and it's recommended to take notes ASAP (to avoid any bias post-interview).
No excuse for typing loudly + not stopping after you addressed it however. Sorry you've had a bad interview experience!
For what it's worth, I interviewed with Google twice, and got rejected the first time. One of my on-site interviewers asked me a simple question that has a large number of complex answers. I gave at least 4 different methods (all valid, with different pros/cons) to solve the problem, and they were not impressed at all...
The typing thing is a killer - they practically have to transcribe the entire thing, so there is a 0% chance of making any kind of human connection with your potential coworker.
I've had Google recruiters reach out to me multiple times over the years and I've never taken them up on an interview. This last time, I told the recruiter I just wasn't interested in going through the process, and his response was a shocked "but why??" Is it so shocking that I don't want to subject myself to grueling demoralizing interviews to likely be rejected? I can get good work, with people I know, making good money, without going through any of that. I'm not interested in the Google Hazing.
I had a Google recruiter reach out a couple of years back, and I asked for just some ballpark idea of the salary range. No dice - they wouldn't discuss even a ballpark range until I'd flown out and given up a couple of days of my time for travel and interviews. I didn't follow up.
Great. So if you have ~6 years of experience, they will quote you a range of 150k-220k for, say, L4-L6, when your TC (base + bonus + RSU) might be anywhere between 200K and 575K[1].
That doesn't sound like a particularly useful answer.
Isn't that exactly what MTS ("Member of Technical Staff") and "slotting" were at google, at least up until around 2012? They hired people not knowing exactly where to level them, and after an introductory period, they were placed somewhere along L4 to L6.
I had to look this up because I'd never heard of it.
Slotting and MTS existed, but indeed haven't been done since 2011 or 2012, and hadn't applied to L3 or L4 candidates for much longer than that, which means it was likely a way to differentiate between borderline L5 and L6 candidates, or borderline L6 and L7.
You can google the salary range, it's not too hard. Protip: it's pretty low unless you have competing offers, in which case it can be a little larger than your largest competing offer.
> Is it so shocking that I don't want to subject myself to grueling demoralizing interviews to likely be rejected?
Yes, it is. And this is a problem bigger than Google, this is simply one of the things that are going wrong on this planet. It is however one of the problems that are _relatively_ easier to fix.
I was just reading a thread on Linkedin by a recruiter who lined up 30 candidates based on the client's requirements, only to have the client reject all 30 of them. This problem is rampant and I don't think anyone has a handle on it.
A few observations:
- Modern society is so captured by social media, it's easy to make shallow connections but when it comes time to make the deeper connection, an actual hiring decision, hiring managers simply balk, and Loss Aversion Bias takes over.
- Being able to apply via the Internet creates too big a pipeline and shifts burden to the companies, which don't have the resources to handle the screening workload in a way that shows any consideration for the candidate.
- The shifting nature of technology makes the recruiting business hard. How is the average newcomer to recruiting supposed to be savvy enough to suss out the 12 different ways someone can be a "Docker Expert?" It's just absurd.
- Everyone hates the status quo. Job candidates, hiring managers, people who work in the staffing industry all hate the status quo and wish for a better way to guide talent toward the places where it is needed.
I'm not sure I follow. It's shocking because I should want to go through a Google interview even though I have many other more interesting and just-as-lucrative options available to me? I'm not a stranger to taking risks and doing extremely difficult things, but Google has demonstrated to me, anecdotally through other talented and gifted interview candidates who failed, a high degree of randomness and subjectivity that is out of proportion to the difficulty of their interview process.
I just interviewed for an embedded position a few weeks ago. Got a rejection - need to work on my coding and design skills apparently - so take this with a grain of salt.
I felt like there was tremendous variability in the interviewers. Some were disinterested, some were engaging. Some asked relevant questions, some clearly had no idea what my background was or what position I was interviewing for. Some asked canned questions and wanted canned responses. Some explored my experience and knowledge with open-ended questions.
It is my experience that interviewers from countries with rote-learning based educational systems ask canned questions and expect regurgitated answers. Interviewers from western educational systems performed more fluid interviews (where, incidentally, I excelled).
I do not believe Google's current interview process selects for the type of smart people they claim to want. Then again, they're now a big, not-so-benevolent corporation with lots of grunt work (i.e. maintenance, bug fixing), so maybe they just want effective robots?
I'm fully able to admit that I just wasn't smart enough for their organization. Then again, another leading tech company loved me and stuck me on their core design team. Based on my conversations with Google's engineers during the on-site, I did not feel like I was dealing with the best and brightest, whereas at my new company I felt awed by the depth, experience and accomplishments of the team.
> It is my experience that interviewers from countries with rote-learning based educational systems ask canned questions and expect regurgitated answers. Interviewers from western educational systems performed more fluid interviews (where, incidentally, I excelled).
> I do not believe Google's current interview process selects for the type of smart people they claim to want. Then again, they're now a big, not-so-benevolent corporation with lots of grunt work (i.e. maintenance, bug fixing), so maybe they just want effective robots?
I can't speak for other interviewers directly, but the company really doesn't provide any incentives to do a good job or put a lot of thought into doing interviews.
For example, the coding questions that I ask are not that complex. They mostly involve translating a process that a non-engineer might have to deal with in everyday life into code. This minimizes any kind of selection bias -- if you're giving a general coding interview but the problem involves implementing something domain-specific, e.g. regex, then your results are biased in favor of people who have worked with regex.
I don't know if other interviewers at Google have put in that kind of thought to improving the interview experience.
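To make that concrete, here's a made-up example of the kind of "everyday process" question I mean (not one of my actual questions): split a shared grocery bill evenly, something that requires no domain knowledge at all.

    # Hypothetical "everyday process" question: total a shared grocery run
    # and split it evenly, rounding each share to cents.
    def split_grocery_bill(items, num_people):
        """items: list of (name, unit_price, quantity) tuples."""
        if num_people <= 0:
            raise ValueError("need at least one person")
        total = sum(price * qty for _, price, qty in items)
        return total, round(total / num_people, 2)

    receipt = [("milk", 3.50, 2), ("bread", 2.25, 1), ("eggs", 4.00, 1)]
    print(split_grocery_bill(receipt, 3))  # (13.25, 4.42)

The problem statement itself doesn't favor anyone's particular domain background, which is the point.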
> I'm fully able to admit that I just wasn't smart enough for their organization. Then again, another leading tech company loved me and stuck me on their core design team. Based on my conversations with Google's engineers during the on-site, I did not feel like I was dealing with the best and brightest, whereas at my new company I felt awed by the depth, experience and accomplishments of the team.
Don't feel so bad. The quality of engineers and managers varies a lot. I've seen some of the best and some of the worst -- there's a huge range.
Yeah right - unless it's changed, it was _only_ about solving the problem. Had an interview with them ~7 years ago: an unfriendly, hard-to-understand interviewer asking about weighing boxes of pennies in a minimum number of steps. I got it to 2, the interviewer said "no, you can do it in 1" and then refused to move forward. I had 10 years of experience at that time; that was never discussed, and he would not answer my questions about working at Google. The recruiter afterward ignored my follow-up email.
In theory yes, probably that's the intention. Several other companies claim the same. Interviewers are nice and everything; however, this whole process is flawed, and the post doesn't depict the reality at all.
In practice, interviewers more often than not don't care about anything other than solving the problem. Some are checking their emails while the candidates solve the problem, some are just trying to give the most obscure problems using exotic data structures, and some are simply too lazy to pay attention. So, I would say whatever the procedure is, there is an enforcement problem.
It is what it is with most companies. So we leetcode like hell, and hope for the best.
EDIT: I've had a much better interview experience when the interviewers belong to the team I will work for. I think there is a clearer incentive for interviewers to be more interested.
"its not about the problem" but if you cant correctly code one of these 200 algorithms on a whiteboard in sub 15 minutes while under extreme stress youre not getting an offer either.
You don’t get to actually run the code. They just let you type it in a google doc. I guess it’s not done universally, but it was when I interviewed there.
Ah yes, I'm sure everyone is out there leisurely deriving from first principles their implementation of XYZ graph algorithm which happened to be useful in the course of solving said "open-ended question".
Speaking of misrepresentation, I never said you're "asked" a pure algorithm question, only that the timely implementation is a necessary component of succeeding.
Thanks for explaining how coding interviews work to me though; I really had no idea, based on the 200 I've administered in recent years or the dozens I've taken in my career.
Because the people they hire can't do all the algorithms at interview speed either - they were just lucky to have reviewed the right material recently.
I just think it's annoying because lots of companies cargo-cult Google, and what they test is a skill set largely tangential to the rigors of day-to-day programming work and knowledge. It's a ritual from a time when they were hiring small numbers of graduate students with no real programming experience to measure them on. Also, the whiteboard is a relic of a time before there were laptops with hot-swappable second screens.
Because it's not a good measure of your programming skill. To be honest, I legit don't care whom Google hires; what I hate is that there are all these other people who start adopting these attitudes, and it can have a really shitty influence on the whole industry.
I don't think you need to worry about it. Google does interviews like this because they can afford to reject many good developers: they have far more candidates than they actually need to hire. I guess only the most popular tech companies can do that. I bet it will never become a standard for the whole industry.
I've heard a lot of really positive accounts on their interviews but mine was nothing like those. My interviewer showed up late and basically spaced out for the entire 40 minutes. He clearly wasn't listening to a thing I was saying - I was verbalizing everything and I had to keep prompting him when I wanted to clarify something. Certainly not a conversation despite everything I had seen and read about their interviews. The cherry on top was when I asked the interviewer at the end what he liked most about working at Google. All he had was that he liked the commuter allowance. Incredible.
Maybe the guy was just having an off day but having spent a month preparing I felt cheated.
> I asked the interviewer at the end about what he liked about working at Google and all he had was that he liked the commuter allowance. Incredible.
The interviewer I had couldn't even manage to come up with anything he liked about Google when I asked. Then he went on to complain about being forced to do interviews.
There are probably laws against it (at least in California) but the variation in interview experiences in this thread indicates that Google could possibly benefit from recording (video or at least audio) of interviews for quality control.
When searching for jobs, I actually find the interview process of a company to be an extremely useful filter/signal.
I tend to outright reject companies who have interview processes like Google's.
At its best, it filters for a set of qualities that have little correlation with the qualities of people whom I tend to enjoy working with professionally, and at its worst, it can be easily gamed by anyone who's willing to play the interview-prep game, which further reduces the quality of candidates that the process lets through.
This means even if I end up getting the job at a company with an interview process like Google's, I'm unlikely to be surrounded by people who I enjoy working with.
Even if I somehow get lucky and everybody at the company turns out to be amazing at the moment I join, the fact that their interview process is essentially a crapshoot means that won't stay true for very long as the company scales up and the law of large numbers kicks in.
This seems like a highly optimistic, happy path description of a nigh-impossible process. Testing for leadership in an hour long interview one-on-one? Yeah right. Sounds like a way of hitting your talking point of "we hire for great leadership!" Don't get me wrong, what they say they are trying to hire sounds great, and I think they think they are trying to find those things, but I'm highly skeptical of both the process and effectiveness. Any large corporation has a problem with diffused goals. If you are a small business owner, you interview and have direct skin in the game. You might do a bad job, but you are close to the metal in terms of what you think you are hiring for, and you get the results of your choices. If you personally believe you can spot a leader, then maybe you can, great. But a large company that puts out talking points about hiring great leaders and team players probably can't even elucidate what that would mean (especially when the team member isn't interviewing, so you go for the generic common denominator and average blandness, with no real idea of what it means in particular), and if they can then I'm skeptical they are finding it-- there's a limited number of great leaders around and they typically aren't interested in doing the grunt work of maintaining an obscure cog in an obscure system in perpetuity, reporting to a faceless committee so they can make more ad revenue. You usually find great leaders somewhere more exciting and independent, it's the nature of the beast.
I think this is great advice that would help move a candidate from a borderline hire/no-hire to a hire, but let's be honest: you WON'T be hired if you don't solve the problems with a proper answer!
Months of preparation + LeetCode + Multiple interviews are critical to increase your chances to get an offer.
I wonder if they have done the experiment where, after some basic screening for relevant work history and a decent personality, you just randomly admit candidates and see if they do just as well as anyone else. You could still do the algorithm problem-solving style interviews, but just don't make a rejection decision based on them. The idea here would be to randomly approve people who would normally be rejected and see if they do just as well.
That sounds like a very interesting, expensive, and borderline unethical experiment. I don't think it's fair to play with people's lives like that, and firing those who don't work out sounds like a nightmare, but I'd love to see the data that would come out of such a study.
Maybe. The hypothesis is whether these algorithm problem-solving whiteboard interviews test and predict job performance. Plenty of people have argued that they don't, and the anecdotal evidence agrees, because the people who fail these interviews are still gainfully employed elsewhere in tech jobs. Google should have some evidence that they do, or else it's a waste of time (although not necessarily for all jobs). I don't object to these whiteboard problem-solving interviews per se, because they basically test whether you can think like a computer scientist, which may be the primary skill they want. There must be a correlation between "thinks like a computer scientist" and "can program." But I think there are plenty of folks who can program even though they don't think like a computer scientist.
I don't think whiteboard interviews do show one can "think like a computer scientist." What they might show is that one can recognize that a toy problem can be solved by some algorithm that was research paper-worthy some years ago. Do you think the authors of those papers wrote them, or came up with the fundamental algorithms in 45 minutes?
Regardless, what companies should be looking for is "can this person do the job." "Can this person think like a computer scientist" is a poor proxy for most jobs. It would be more appropriate to want to find out "can this person think like a software engineer," but, instead, we get "can this person regurgitate problem 257 from LeetCode."
I would argue that the negative signal is probably pretty high from this style of interview. That is, you're unlikely to have someone who's capable of writing down a nontrivial piece of code who can't actually write any code. But, the positive signal, the "can this person do the job" signal, is very very low.
I'll give you an example of a question I've been asked multiple times in a 45-60 minute interview: implement a text-based connect 4 game.
This is a totally valid programming task. That's why it's an assignment in some beginning CS classes. Without such tight time pressure and scrutiny as an interview implies, I would expect any working software engineer to be able to come up with working code in a fairly short period of time. Under interview conditions, where one has to not only come up with working code, but chatter constantly while doing so, it becomes a lot harder, simply because one has to multitask a little. Writing code and talking about it simultaneously is hard, and it's not a skill that's necessary to the vast majority of software engineer positions. Writing down tricky algorithms on a whiteboard without notes is not a skill that's necessary to any occupation as far as I can tell.
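For reference, here's a rough sketch of the kind of answer that question is after (my own bare-bones version, not a model answer, with minimal input validation):

    # Bare-bones text Connect 4: 6x7 board, two players, win/draw detection.
    ROWS, COLS = 6, 7

    def new_board():
        return [["." for _ in range(COLS)] for _ in range(ROWS)]

    def drop(board, col, piece):
        """Drop a piece into a column; return the row it lands in, or None if full."""
        for row in range(ROWS - 1, -1, -1):
            if board[row][col] == ".":
                board[row][col] = piece
                return row
        return None

    def wins(board, row, col, piece):
        """Check all four directions through (row, col) for four in a row."""
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            count = 1
            for sign in (1, -1):
                r, c = row + sign * dr, col + sign * dc
                while 0 <= r < ROWS and 0 <= c < COLS and board[r][c] == piece:
                    count += 1
                    r, c = r + sign * dr, c + sign * dc
            if count >= 4:
                return True
        return False

    def render(board):
        print("\n".join(" ".join(row) for row in board))
        print(" ".join(str(c) for c in range(COLS)))

    def play():
        board, players = new_board(), ("X", "O")
        for turn in range(ROWS * COLS):
            piece = players[turn % 2]
            render(board)
            col = int(input("Player " + piece + ", pick a column (0-6): "))
            row = drop(board, col, piece)
            if row is None:
                print("Column full, turn forfeited.")
                continue
            if wins(board, row, col, piece):
                render(board)
                print("Player " + piece + " wins!")
                return
        print("Draw.")

    if __name__ == "__main__":
        play()

Nothing exotic, but writing even this while narrating every decision out loud is where the time pressure bites.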
I keep reading about how bad Google's interview process is. But has anyone done any proper large-scale research on the best way to hire software engineers? All the comments here seem like opinions of people based on their own experience, and no one provides a proper alternative. Just curious!
To be fair, the bit of a juiced-up kick to the brain from doing a LeetCode problem on the train while commuting to work has made me feel much more engaged once I show up.
In my experience as a candidate it is about solving the problem, actually, despite the fact that we constantly hear interviewers claim the contrary. I'd be curious to know how long it's been since this individual has gone through a FAANG interview process as a candidate themselves. *
To me this feels like a case of unconscious bias: I think many people who interview genuinely believe they are looking for some set of criteria {A,B,C}, but their evaluations/recommendations actually reveal a bias towards {X,Y,Z}. So as a candidate, after you've been interviewed enough times you realize that X,Y,Z is really the name of the game, and posts like these just seem totally out of touch.
* I wonder if anyone has tried introducing some kind of control group or blind study mechanism where some small % of candidates are actually secretly employees of the company. Probably would be hard for in-person interviews because people might be recognized, but maybe more practical for phone screens. The trained decoy candidate could basically attempt a problem in exactly the same way for many interviewers and then data could be analyzed on the outcomes to detect what biases are present.
It's very clear that interviewer bias matters a lot in the process. It shouldn't. In my case, I got one interviewer who was working on his laptop while I was designing the service he wanted, and another who wanted a very specific, canned solution to the very specific, canned problem he threw at me, instead of trying to understand my different approach to the problem.
2 of 5 bad interviewers. I left with a bad taste in my mouth, with the feeling that I didn't fail because I wasn't skilled or prepared enough for the company, but because I was unlucky to get a bad set of interviewers. If it were just me with that feeling, I would agree that I was biased by not receiving an offer, but I see this happening a lot with good engineers, and throwing all the responsibility onto the interviewees' shoulders with "you are bitter for not receiving an offer" or "git gud" arguments is just avoiding the discussion.
Anyway, it's their process, and if it's working for them, they have no reason to change. Just stop advocating for it as if it were a nearly perfect hiring process: it's very clear that it is far from it.
I like how the article tried to portray these interviews as super professional, as if they follow a plan or something. My experience is that most interviewers don't prepare; they have probably been asking the same question for years. And no, they are not simultaneously interviewing for multiple skills, they just try to sound professional. If you are a dev, better LeetCode. That's all that matters at Google.
I don't work at Google, but I've been doing technical interviews for years and I almost always ask the same question. The reason is I've given hundreds of interviews with that question, so I'm prepared for almost all the ways people can solve it, and especially all of the ways people get stuck or approach it in ways that would probably be useful if you had more than 45 minutes but aren't going to work in an interview setting.
When I've worked with interview plans, it was about having the right mix of different technical interview styles and interviews that aren't really technical (often someone should run through the resume and validate the things claimed).
Alright, so here you finally hear it said out loud - it's all about whether interviewers like you, not whether you are a top performer. Increase your likability by any means necessary, don't spend too much time on building your technical excellence, it's pointless and not rewarded if you want to be at FAANG.
Or start a new company, get backing outside their VC arms and challenge them if you want to have fun.
I've interviewed at a couple of FAANG companies, some multiple times. I've yet to get an offer, despite feeling like a (considerably?) better-than-average candidate. I get the distinct impression that their hiring methods get them tolerable sensitivity and excellent specificity [1]. But I have a hard time believing this author's claim that "it's not about solving the problem."
If they reject candidates who don't/can't consider the algorithmic complexity of a particular solution, then they'll likely reject some candidates who would have performed well at a job there. But they won't extend an offer to a candidate who wouldn't perform well. Hiring an underperforming employee has several costs. Obvious costs: the time spent gathering evidence to let them go, and management doesn't like delivering bad news. A more subtle cost: the hit to morale on a high-performing team. IMO it is exhausting being on a team where everyone frequently chooses the least challenging design options, frequently ignores process and follow-through, and frequently ignores opportunities to innovate and improve.
Does the interview process in place actually help achieve this goal? I'm not sure, but I'd wager that it might actually come closer than any other interview process.
All that said, being on the receiving end of these rejections is a real disappointment. But from everything I've read, you may have to buckle down and commit lots of preparation time in order to get an offer. I haven't done that, and I suppose I should expect to get rejections until I do.
Disclaimer: I work for Google and am an interviewer. I'm speaking for myself and not my employer, and not citing any internal data, just my own experience.
I don't know what our offer rate is, but I think it is probably well below 50% even once you make it onsite, so being above average wouldn't be enough.
Our hiring process is also tuned for low false positive rate, at the cost of high false negative rate. Maybe it's optimal, I don't know, but two of my referrals ended up going to Facebook after not getting offers at Google, and doing very well there (one exceptionally so). We definitely should have hired them and we didn't.
This is ridiculous: you will be given a no-hire recommendation by the interviewer at a FAANG company if you do not provide an optimal solution. This is very well documented on sites/apps like Blind and accurate from my personal experience. It's corporate doublespeak to state otherwise.
Can't give an O(N) solution but come up with a correct O(N^2) solution? Too bad - go do more Leetcode.
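The classic "two sum" question (not named above, just a common illustration) shows the gap being described - both versions below are correct, but only one ticks the "optimal" box:

    # O(N^2): check every pair until one adds up to the target.
    def two_sum_quadratic(nums, target):
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return i, j
        return None

    # O(N): one pass, remembering values already seen in a hash map.
    def two_sum_linear(nums, target):
        seen = {}
        for j, value in enumerate(nums):
            if target - value in seen:
                return seen[target - value], j
            seen[value] = j
        return None

    assert two_sum_quadratic([2, 7, 11, 15], 9) == (0, 1)
    assert two_sum_linear([2, 7, 11, 15], 9) == (0, 1)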
For most companies, it’s not doublespeak, it’s outright lying. My successful interviews have been 100% correlated with times I’ve gotten to the optimal solution.
Yeah, I struggle with that, too. I've never interviewed with them, much less worked there, so everything I know or think I know about their interviewing process and work culture is second- or third-hand, but it sounds pretty miserable from the outside looking in. Still, they seem to attract a lot of people who really love working there, and they're producing things like Google Maps, Gmail, self-driving cars, Android, Google Docs, and a search engine... I'd be irritated if I interviewed there and was rejected (although I'm sure I have much worse interview rejection stories from far less prestigious employers), but I'm not sure that would necessarily mean that they're doing it wrong.
I thought they said average score doesn't correlate at all with how you do once you're in? A lot of high performers only get in the 2nd or 3rd time, and barely.
I can't speak for whether that was true in the past.
(If you really care I can bug people for historical data.)
However, as of 2014, average interview score was very well correlated with job performance (I can't give you further exact details because they are still marked confidential and I can't find a paper or anything that has disclosed it).
Note very clearly the word average above ;)
Individual interview scores are definitely not predictive of job performance.
Those interested in learning more would do well to read Kahneman's "Thinking, Fast and Slow." Averaging across multiple axes and multiple interviewers gives a good signal.
When I get interview feedback that amounts to literally not having coded a complete solution, despite strong communication skills and understanding of the solution and the underlying CS concepts, I really can't help but conclude this is nothing but PR...
Exactly. What interviewers are really looking for is speed and fluency in working through the solution. The easiest way to get there is by already having seen the problem. This leads directly to the “grind hundreds of leetcode” meme seen on Blind. (“LC and TC or GTFO,” in other words.)
I am not sure if this is in the interest of any company, but why not provide feedback? It can only help interviewees in the future and help companies attract better talent.
Even if it's not standardized, a rubric across X attributes with a scale and a number. (Which is what I am assuming they are doing anyway, as it sounds like the interviewers are rating candidates across different dimensions.)
Not sure if this is some sort of liability thing, but the interviewee could at least get generalized feedback.
Ex: technical skills (2/5), communication skills (4/5), etc., and they can improve on the relevant dimensions or compare notes with other companies/candidates as well.
I went through an onsite interview at Booking (and failed one part out of 3), and afterwards they provided me with nice feedback on topics that I should strengthen. I totally agreed with the feedback, as I felt I was failing questions on those topics during the interview. Oh, and the recruiter even suggested reapplying once I had strengthened those weak skills.
Experience says that in addition to liability, failed candidates just try to argue and get upset. People claim they want feedback but then claim that the interviewer must have misjudged them.
More work for the company, and leads to even more negativity in the process.
I agree with the possible liability aspect but if they are doing it already, i.e. taking notes or rubric scoring, then it's just a matter of sending a summarized version of that and hopefully it's feedback from multiple people and not just one person's opinion.
The negativity likely exists anyway from getting no response or feedback (which can be frustrating for people who spent time and effort), but at least, as the previous poster mentioned, some people do appreciate feedback.
Also, I recently noticed that companies have automated questionnaires asking how the recruiting process was or how it could be improved; I wonder if there have been any improvements based on that feedback.
As an aside, it's been kind of interesting to read Glassdoor responses on how people were treated and felt during the whole process (timing, professionalism, etc.).
I meant to post this earlier but forgot to, sorry.
Yes. I can't remember where I came across the data, but providing feedback didn't actually increase interview satisfaction; it decreased it in the aggregate.
Candidates are prone to arguing and trying to nitpick the feedback to get reconsidered. It's counterintuitive, but that's what really happened.
I keep hearing this but it is simply not compatible with my experience.
Often the problems are given without any interviewer at all, for one (the initial screens).
Even for ones where the interviewer is present, if I solve the problem, all is well. If I do not, all is not well. There doesn't seem to be any middle ground where I fail to solve the problem and it's fine.
That is definitely her personal opinion, as she is not a spokesperson for Google as far as hiring is concerned. Neither am I.
However, I think if you ask people who conduct software engineer interviews at Google, a lot of them will agree with her opinion, including myself. One thing I would add, is that there is unfortunately no strong consensus on what to expect from the candidate, so variance seems to be high. That is why candidates are encouraged to try again, if unsuccessful.
This article/blog entry says really nothing for the most part. I know two people personally who interviewed for Google. Neither was hired; one got to a sixth interview, the other only a fourth. Both told me the same two things:
How do you solve a problem?
How do you solve someone else's problem?
I counter with "who cares?". CAN YOU SOLVE the problem is the winning answer. I think Google is really guilty of wasting their own time and the candidates' time with that many interviews without any payoff. I know within 15 minutes if a person is going to a) fit the culture and b) be able to do the job. If Google needs more than one interview to figure that out, then they are doing it wrong.
What Google is really doing is providing someone with the privilege of having google on their resume. I doubt it's worth all that.
I suspect all large company interview systems are about ensuring you have a large sunk cost of effort in the ‘debit’ column so that you’ll be pleased to accept substandard terms if you get to the end point.
Much the same as the student degree system with its non-defaultable loans, designed to keep you firmly in the rat race with no time to question whether three-hour commutes and twelve-hour days are a sensible way to organise a society.
The singular most useful skill anybody can master in IT or any other profession is the ability to write off a loss and not let it worry you.
> I suspect all large company interview systems are about ensuring you have a large sunk cost of effort in the ‘debit’ column so that you’ll be pleased to accept substandard terms if you get to the end point.
I believe part of why the technical interview gauntlet has slowly become the norm and persists is its usefulness for exactly that, although I don't think it's happening consciously; it's just a side benefit that those who hire are picking up on subconsciously.
I know that if I'm unable to answer one of their questions in an interview, I have a hard-to-escape feeling of "they don't think I'm their ideal candidate," and then even if it does get to the point where they want to extend an offer, my brain doesn't want to fight too hard to negotiate a better offer, because "what if they think I was just barely adequate and I ask for way too much?"
It's the wrong feeling to have, but I think it's inherent in human psychology. I think this type of interview process is (intentionally or not) manipulative and reminds me of pickup artist "culture", where they will 'neg' women to cause them to feel bad about themselves, weaken their confidence, and lower their standards.
If you struggled to answer things you should encounter on a regular basis at your current job, then it's mostly on you (I've been under-prepared like that for some interviews myself, like not really being able to talk about projects I worked on when I first started a job, because we've done so much other, totally different crap since then and I haven't thought about it at all for a year or two). But if you're expected to refresh a 4-year degree and re-become an expert in every single technology stack you've ever worked on professionally every time you want to find a new job, in the hopes that whatever random questions they ask will be the things you studied and have fresh in your head, then I think there are mostly bad reasons why this has become the norm.
I also think another big reason this has become the norm is that the process selects for fresh graduates who don't have any idea of their worth, have few responsibilities and no extracurricular hobbies, and are willing to put in an insane amount of time and effort to prove themselves.
I interviewed for a Technical Solutions Engineer role over there, and in contrast to some of the other comments in this thread, the interview was surprisingly reasonable. Nothing too tricky, just standard basic log parsing stuff with Python. Didn't get it, but at least the questions were clear, the interviewer spoke English clearly, and he was friendly overall. Compare this to Facebook, where English was clearly not my interviewer's first language and he didn't explain the questions clearly. I'd gladly interview with Google again.
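(In case anyone's curious what "basic log parsing stuff" can mean: not the actual question I was given, but something in this spirit - counting requests per status code in a made-up access-log format.)

    import re
    from collections import Counter

    # Matches lines like: 10.0.0.1 - - [01/Jan/2020] "GET /index.html HTTP/1.1" 200 512
    LINE_RE = re.compile(r'"\w+ \S+ HTTP/[\d.]+" (?P<status>\d{3})')

    def status_counts(lines):
        counts = Counter()
        for line in lines:
            match = LINE_RE.search(line)
            if match:
                counts[match.group("status")] += 1
        return counts

    sample = [
        '10.0.0.1 - - [01/Jan/2020] "GET /index.html HTTP/1.1" 200 512',
        '10.0.0.2 - - [01/Jan/2020] "GET /missing HTTP/1.1" 404 128',
        '10.0.0.1 - - [01/Jan/2020] "POST /api HTTP/1.1" 200 64',
    ]
    print(status_counts(sample))  # Counter({'200': 2, '404': 1})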
> Your interviewers try to understand what it feels like to work with you on a daily basis.
I conducted dozens of interviews at my last place. If the goal is to identify jerks, then in my experience it is best to create a friendly and comfortable atmosphere during the interview, simply because people with that kind of personality will mistake friendliness for weakness and start to complain and argue about questions or even mild non-positive feedback. Cool people, on the other hand, will be able to open up and show their skills more easily.
That definitely includes tech interviewers. As I get older I've started pushing back on these bad interview decisions, but I can't help feeling it's a losing proposition without support from fellow developers.
As a category I now decline any questions at a site like coderpad.io where you have to code with a gun to your head. They're simply a waste of time for everyone involved.
On the other hand, I haven't declined any that are given as homework, except one where the guy said it would take 8-10 hours. <= 4 is a firm rule with me, unless they want to provide a stipend.
Two years ago was the most recent time someone asked about my "greatest weakness" and I simply lol'd.
So, while I realize the content of the questions themselves could be problematic, that's not generally what I push back on. Perhaps if the question was totally off the wall rather than merely a gotcha.
I have never interviewed for Google, so I wouldn't know if that's actually how it goes. But that sounds like how I run interviews myself. I think of myself as a fairly good interviewer, and I've accepted people who totally bombed the actual question but who, in the way they behaved and talked and thought, seemed to be a good fit.
Hiring is hard, and I do think asking hard questions (on any subject really) does show a lot of the candidate...
I perform interviews from time to time for the company I work for and my conclusion is as follows: interviews are hit-or-miss for the most part and the only way to improve on that is to spend a prohibitive amount of time on them.
Also the Dunning-Kruger effect is strong among interviewers. Especially given that as a person in this role you're given this tiny bit of power that is necessary to enable it.
The Silicon Valley interview system is a major blind spot for these companies: a structural weakness which internal dogma paints as an unquestionable strength.
It's like the Five-Year Plan in the Soviet Union. You could criticize it, but it's so internalized in how the place works that there's no way to get rid of it without blowing up everything.
A lot of Google employees might not pass the interview if they went through it a second time,
because it all depends on the questions they come across. The interview process is testing "do you know what I am going to ask," and that's all that matters.
Hiring continues to be a favourite whipping boy on HN and honestly I kind of wish it would die because it's the same arguments every time:
- Inconsistent interviews
- Luck of the draw questions
- "I don't do well coding on a whiteboard" (often framed as "coding on a whiteboard proves nothing")
- Bad experience with the process
- Etc
Personally I don't mind coding on a whiteboard but only if you understand why you're asking a candidate to do it and what you hope to gain. Unfortunately many (IMHO) get this part wrong.
Obviously FizzBuzz was influential here. And I honestly think FizzBuzz is the right way to think about live coding tests because it's simple. It's deceptively simple such that anyone competent easily falls into the trap of thinking the question needs to be harder and this is a problem with many FAAMG interviewers.
On the other side I think there are people who don't realize how many people are masquerading as programmers who can't program a for loop in their language of choice. It's actually hard to believe for anyone semi-competent unless you've witnessed it but it's true.
FizzBuzz is a simple problem aimed at providing an early negative signal on a candidate. Every word of that was deliberate and important. It's simple so anyone remotely competent will pass it within minutes and you can move on.
This doesn't mean that if you ace it you're a good engineer ie there is ZERO positive signal here. The negative signal is if you can't solve this simple problem in your language of choice because then you almost certainly aren't a good engineer and you (the interviewer/employer) can stop wasting your time.
This is why it's so important it's a simple problem because a hard problem adds very little positive signal and greatly reduces the negative signal. Some people are bad under pressure with hard problems. Some questions are a matter of knowing the trick. Finding cycles in a graph is trivial if you are familiar with the tortoise and hare algorithm. If not you may figure it out from first principles but if you don't it doesn't mean you're a bad programmer or you shouldn't be hired. That's the problem.
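For anyone who hasn't seen it, FizzBuzz itself really is just this, which is exactly why it works as a pure negative-signal filter:

    # Canonical FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz".
    def fizzbuzz(n):
        out = []
        for i in range(1, n + 1):
            if i % 15 == 0:
                out.append("FizzBuzz")
            elif i % 3 == 0:
                out.append("Fizz")
            elif i % 5 == 0:
                out.append("Buzz")
            else:
                out.append(str(i))
        return out

    print("\n".join(fizzbuzz(15)))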
On the other side some like to lambast interview processes if they have a nonzero false negative rate. These stories usually go "I referred excellent engineer X and they bombed out on a random coding question" or similar. This happens but getting a false negative doesn't invalidate the system.
It's important to remember that the goals of the employer and the candidate are different. The candidate's goal is to get a positive signal across to the employer. The employer's goal is to minimize time spent per candidate (since this is expensive) while hiring a sufficient number of qualified candidates with a minimum of false positives.
Again, every word of that is important. False positives are expensive. If you have a pool of 100 people to fill 10 roles and 20 will work out, then, as the employer, you often don't care which of those 20 you get, as long as you get 10 of them. There's an effort-reward curve between getting any 10 of the 20 qualified candidates vs the best 10.
Lastly, this is also why there is an interview slate of 4+ interviews. A single bad interview does not kill your chances.
I'd say Google's biggest problem is interviewer dead wood. These are people who have their pet questions, which were banned years ago (as either being too well known and/or just being bad questions), but they keep asking them anyway. Or they know <pick your language>, force the candidate to use it, and then mark them down for not knowing it (when the candidate never claimed to know it). Part of the delay in Google hiring, too, is that some people will do an interview and won't submit feedback for 1 or even 2 weeks, which I personally found inexcusable and infuriating.
But interviewing is one of those things that everyone is expected to do, which needs to change as many people are either bad at it or just don't really care.
I would also say, if anything, Google (and this probably applies to most if not all big tech companies) doesn't filter often and early enough. I saw candidates who never should've made it past a phone screen. In some cases I saw the phone screen feedback (as part of the whole packet) and I honestly don't know why it didn't end there. My own theory was that recruiters largely controlled this and they had quotas of phone screens, onsite interviews, and hires to meet, and this just ended up wasting a lot of interviewers' time. But that was just a theory.
The problem with FizzBuzz is that it's too well known. So it's basically something which has very little signal because people can just google it and memorize the solution.
Two of my favorite phone screen questions (now unfortunately banned because they have been identified on various web sites like Glassdoor as being Google interview questions) were "validate a UTF-8 string" (the interviewee is given the UTF-8 rules), and "add an integer to a bignum" (the interviewee is told what a bignum is if they don't know the term).
Both of these are really simple programming problems that an experienced coder should be able to knock off in 5 minutes. The absolutely terrifying thing is that there are fresh graduates with a CS degree who couldn't deal with either of these in the full 45-minute interview slot. I'm not sure what colleges are teaching these days, but it's certainly not programming as I know it...
The reason why these were my favorite phone screen questions was if the candidate couldn't hack a question like that, I could very confidently write up my interview report and tell the recruiter --- don't bother with the expense of bringing the candidate on site and asking 4-6 software engineers to spend 2-3 hours interviewing the candidate and then writing up a comprehensive set of interview notes/report.
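To give a sense of the expected difficulty, here's a rough sketch of the UTF-8 question (my own take, not an official answer key; it only checks the byte structure and ignores overlong encodings and surrogates, which is about the level a 45-minute screen is after):

    def is_valid_utf8(data):
        """data: a bytes object. Structural validity check only."""
        i = 0
        while i < len(data):
            byte = data[i]
            if byte >> 7 == 0b0:          # 0xxxxxxx: 1-byte character
                length = 1
            elif byte >> 5 == 0b110:      # 110xxxxx: leads a 2-byte character
                length = 2
            elif byte >> 4 == 0b1110:     # 1110xxxx: leads a 3-byte character
                length = 3
            elif byte >> 3 == 0b11110:    # 11110xxx: leads a 4-byte character
                length = 4
            else:                         # a stray 10xxxxxx continuation byte
                return False
            if i + length > len(data):
                return False
            for j in range(i + 1, i + length):
                if data[j] >> 6 != 0b10:  # continuation bytes must be 10xxxxxx
                    return False
            i += length
        return True

    assert is_valid_utf8("héllo".encode("utf-8"))
    assert not is_valid_utf8(bytes([0xC3, 0x28]))  # bad continuation byte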
The point is "like FizzBuzz", not "actually FizzBuzz". This could just as easily be any of the following (quick sketches of the first two appear after the list):
- Add all the odd integers in an array
- Count the number of vowels in a String (bonus points if they ask about Unicode vs ASCII in the context of what constitutes a vowel). ASCII is the easy case. Unicode is a little more involved. Handle upper and lower case (they should figure out this is an issue).
- Given a set of Strings find all the characters (or words if you prefer) that are unique to only one of the Strings
- Given an ordered sequence 1..10 and the operators {+,-,/,*} where you can put any operator between two numbers but maintain the number order and operator precedence, find the number of operator combinations that yield 5 digit positive integers. Brute force is totally fine. Pick any category of answers you like in fact. Writing this one will take longer than FizzBuzz but the important point here is no specialist algorithmic knowledge other than how arithmetic works is needed and there is no special trick memorization.
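Quick sketches of the first two, just to show how little is actually being asked (ASCII-only for the vowels, per the easy case above):

    # Sum of the odd integers in an array.
    def sum_of_odds(nums):
        return sum(n for n in nums if n % 2 != 0)

    # Count vowels, ASCII only, handling upper and lower case.
    def count_vowels(text):
        return sum(1 for ch in text.lower() if ch in "aeiou")

    assert sum_of_odds([1, 2, 3, 4, 5]) == 9
    assert count_vowels("Hello, World") == 3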
I wouldn't even know where to start for the mostly trivial western European languages, and that's without leaving high bit ASCII territory.
I'd probably end up making a list by hand of every possible vowel. But that's not doable for Unicode. What does it even mean for things like ideographic characters?
> the experienced coder should be able to knock off in 5 minutes
An inexperienced one will knock it off in 5 minutes.
An experienced one will know that the code you'll write in 5 minutes will be way too slow, by an order of magnitude slower than good library implementations.
I enjoyed my interview at Google. They asked some generic questions, but also some stuff specific to my niche.
No one is expecting perfect code. I ended up saying something along the lines of “let’s just pretend the method is called isPresent() because I forget” to every interviewer.
The problem is that it isn't inconsistent and arbitrary, so that would simply be wrong to admit.
There is a tremendous amount of data/studies/etc to back this up, done repeatedly over time.
Both independent of Google and not.
Certainly much more than for any other interview system I've seen suggested or used here.
As for the second, general "Googliness" is not used to turn people down - horrible communication skills/being an asshole/etc are.
Despite your claim, one interviewer noting an issue there will not get you rejected. Two or three noting it would.
One noting it, with follow-up fit calls/interviews confirming it, would as well.
So it's not an effective mechanism for a single interviewer to enforce their biases anyway.
Whether you work there or not, I'm afraid you are misinformed.
Three of my friends (all similar backgrounds) interviewing at Google this year have had completely different interviewing experiences. One even bypassed the phone screen and was flown to Mountain View for an IRL technical screening/interview. These are all for the same role.
I don't see the confusion. I'm saying it's inconsistent and arbitrary. You're saying it is not.
I'm providing an example of how it is inconsistent (my friends' experiences) as well as an a priori rationale for why it's arbitrary (e.g. Googliness is a fuzzy criterion that allows interviewers to make any arbitrary call).
I'm deeply unimpressed with Google's hiring process (I think it works for them because their roles are coveted, not because it's especially good). But you haven't established that their process is "arbitrary", only that it's inconsistent. In particular: I don't see any evidence you've provided that "Googliness" is the reason one of your peers got to skip phone screens.
If You Judge a Fish by Its Ability to Climb a Tree, It Will Live Its Whole Life Believing that It is Stupid
- that quote that is contentiously attributed to that person
(If u ask me, prolly the fish will become an amphibian lol)
I really hope that writing blog posts on Twitter, one tweet per sentence threaded together, doesn't become the norm rather than the exception, but my expectations are low.
This information is for both outside and inside the company. When you interview someone, you cannot go in with a predefined answer that you want the candidate to arrive at.
Companies do not care about the quality of their hiring process, which is sad, because they do care about the quality of their employees.
They say they care, but it's lip service. If companies cared, they would put in place a candidate feedback mechanism, i.e. a post-interview quality survey. You could start by only surveying those you extend an offer to.
The original post is total BS.
These days, Google's interview system seems more about getting better/cheaper replaceable labor, a commodity, particularly with the huge supply of fresh new graduates.
Why does everyone on HN seem to care so much about what Google does in interviews? We get it, no one likes weird off-topic brainteasers. Be a better engineer so you don't have to worry about having them on your resume and you'll be good.
Thanks for posting this! I tend towards the much more conversational interview to get candidates talking, though this has some handy examples of going beyond the candidate's CV and drilling down into the technical details that I'd like to appropriate.
I also have the tendency to take a bunch of whiteboard pens etc into interviews I'm running in case the candidate explains things better using visual aids.
Do people realize that there is no effective solution to this problem of hazing-like interviews?
There are probably millions of high-IQ people applying and vying to work at Google. Tomorrow Google could decide to add juggling competitions to interviews, in addition to algorithm jargon, and while it would lead to a lot more complaining from interviewees, it wouldn't matter to Google: they would still get their fill of qualified, high-IQ applicants.
You decided to latch on to one possible inaccuracy in what I said and disregard the overall point. Even if we agree that millions is not correct, hundreds of thousands or thousands of people? The point is that for every 1 qualified person hired at Google there are probably hundreds of (at the very least) qualified people rejected. So Google can be unbelievably picky before they see any bad effect from their hazing interviews on themselves.
The problem is that there is almost no correlation between how candidates do in whiteboard interviews, and the ability to perform the job on a day to day basis.
Also, and the article says it: you are being tested mostly for personality compatibility with the interviewer, which again has nothing to do with the ability to perform the job.
This, plus the fact that the interviewers are volunteers, creates over time an effect where lots of people in the workforce have similar personality traits, which is not a great thing, because it upsets the normal personality balance that a group needs to work properly.
Personality imbalances in groups usually lead to dysfunctional situations over time: too many sloppy people who just cowboy code until dawn, too many excessively cautious people that won't touch a line of code any more with fear of breaking something, etc.
I don't think the pseudocode whiteboard exercises are pointless, I think they're a useful filter. You can make them simple, and anyone remotely competent who isn't completely crippled by nerves should be able to zip through them.
They don't have to be about esoterica. I just want to know that the candidate can attack and break down a problem, even if they don't know how to do all the bits immediately.
And I want them to have heard of a hash table before.
I've switched to 1-2 hour homework exercises on the advice of an experienced colleague, though.
The problem is, there is a huge disconnect between hash tables and the work we do every day, so why use that as a criterion?
Homework exercises seem like such a better way to go; they're much closer to the actual work experience. And it's easier for the interviewer: all they have to do is assess whether the person actually did the exercise themselves.
Although hash tables are used internally by libraries, they are not directly used by the application developer. When was the last time you had to hand-code a hash table?
Unless you are a library developer, it's an example of a task and knowledge that will never be needed. If it's something that is never used in that particular job, why ask about it in the interview in the first place?
Well, our disagreement is just a matter of miscommunication then.
I never said anything about hand coding a hash table.
I would like prospective candidates for programming jobs to know that there is a thing in their language of choice that lets them make fast lookup tables.
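In Python terms, the bar is roughly knowing that the second version here is the one you reach for when the collection gets big (a trivial sketch):

    user_ids = list(range(1_000_000))

    # Membership test by scanning a list: O(N) per lookup.
    found_slow = 999_999 in user_ids

    # Same test via a set (a hash table under the hood): roughly O(1) per lookup.
    id_set = set(user_ids)
    found_fast = 999_999 in id_set

    assert found_slow and found_fast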
> The problem is that there is almost no correlation between how candidates do in whiteboard interviews, and the ability to perform the job on a day to day basis.
What else would you use as metric? Talking about technical topics?
In my experience, there are people who are good at talking about those things but miserably fail at the actual job. Introducing simple whiteboard programming tasks in addition has helped weed those out and has dramatically improved hiring success.
There are some very solid alternatives to the current system, which, let's face it, makes you do some things in an interview that are borderline ridiculous, where it's a matter of luck more than anything else.
For the technical part, as screening, I would value forms of hard-to-fake proof of technical competence, such as GitHub profiles, meaningful open-source contributions, and Stack Overflow contributions.
Next, for the technical part, a homework assignment. Homework is much closer to the real-world experience: you get a task and you have to figure it out.
Then the technical interview would be to explain the homework. There the only goal is really to confirm that the person did the homework themselves rather than anything else.
For the behavioral part, a full battery of psychological tests, to weed out psychopathic traits mostly.
There are alternatives, the current interviewing system is not a good solution.
Sure, give someone a problem slightly harder than fizz buzz just to make sure they aren't a complete fraud.
But the difference between someone who can solve an easy whiteboarding problem and a hard one is merely the fact that they practiced for months doing it. It is unlikely that solving the hard problem is much of a signal other than that.
> The problem is that there is almost no correlation between how candidates do in whiteboard interviews, and the ability to perform the job on a day to day basis.
> Also, and the article says it: you are being tested mostly for personality compatibility with the interviewer, which again has nothing to do with the ability to perform the job.
For a majority of candidates, there is a correlation (on both aspects). It might not be the best or even a "good" way of determining suitability for a job (compared to, say, doing some actual domain work), but you cannot just deny that there is a correlation.