I'd never be good enough to be a Googler, for two reasons:
1) I graduated from college a long time ago and have worked at a BigCorp for 6+ years, where the toughest technical problems all center around CRUD; nothing requires deep algorithmic, machine learning, or NLP techniques (à la building a search engine).
2) During my free time I hack on my startup, and although the problems are more challenging, I tend not to spend much time optimizing or coming up with the most efficient, elegant solutions because of time constraints. I just ask myself "is this good enough?" and move on. Because of this, I've developed bad habits that would never be allowed at Google.
When you're trying to build services for hundreds of millions of people, optimization becomes key, and it's often harder than building an MVP or "getting shit done". Ask Twitter.
I'd never be able to be a Googler either, for nearly the opposite reason: I left college a long time ago for a job. I have since spent 8+ years working on tough problems that are highly algorithmic and involve tons of optimization work. Yet the lack of a college degree is a deal breaker, AFAICT.
Actually, considering the horror stories we have heard about Google's hiring process, it might be more appropriate to say: "the only winning move is not to play."
Whenever you see a selection system that is very specific and keeps building more specific criteria (this happens in corporate policies, admissions, and interviews, and is also seen often in evolution), your suspicions should be raised. The more specificity, the better a system gets at optimizing for a few local maxima while potentially ignoring other (or most) maxima entirely.
For example we have a fairly "locally optimized" single column spine for something that stands upright, but if we were to design something that stands upright (such as a building) we would rarely use a single column.
Anyway, I wonder what Google does to mitigate such a threat from their specific systems described in the article.
Further I wonder if they collect any data that might tell them whether their process turns people away, because I've heard lots of horror stories (especially on HN) about the Google interview process. The article doesn't seem to mention whether or not the interviewees enjoy the process. I suspect nobody (at Washington Post or Google) bothered to ask them.
(Not that I can blame Google; I've never heard of a company asking recently hired employees how they could make the interview process better. But I figured the WaPo would have an interest in writing that story.)
The article ends with:
> To make sure they don’t miss out on top talent, Google employs a team of full-time screeners to sift through applications. The company would not say how many people are employed in these roles, but it said the group is "sizeable."
I suspect at the end of the day their screeners and recruiters are about as "average" as other companies of their size. Numbers certainly wouldn't make up for quality. Just my anecdote, but here's the first part of the email a Google recruiter sent me:
> I'm a talent scout on the Engineering Staffing Team at Google. I came across your details and feel that you could be the sort of person we are looking for to work in a new role we're hiring for in our Mountain View HQ, called 'Performance Engineer' for which we're looking for a candidate with a combination of compiler, high performance software design and computer architecture experience.
I'm not known for much, but what I am known for is exclusively JavaScript.
> Further I wonder if they collect any data that might tell
> them whether their process turns people away, because
> I've heard lots of horror stories (especially on HN)
> about the Google interview process. The article doesn't
> seem to mention whether or not the interviewees enjoy the
> process. I suspect nobody (at Washington Post or Google)
> bothered to ask them.
There's a fair amount of internal discussion every time a Google Hiring Horror Story comes up in the media. It's worth noting that in those stories, the interview day is generally the only part that goes well. It's all the other stuff (especially scheduling and candidate communication) that needs work.
Not that I can blame Google, I've never heard of a company asking recently hired employees about how they could make the interview process better.
Google sent me a survey after I turned down their job offer (back in 2006). Among other things, I wrote "this recruiter is nearly single-handedly responsible for me not coming to work for you".
A couple years later I heard that she had gotten promoted.
Google does have another outlet, which is acquisitions. When a company is acquired, generally, a big chunk of the employees get hired without the normal interview process. Some of them are essentially on probation, and if they perform well, they are made permanent. Others end up not getting converted.
Google also hires contractors and interns, and some of those get 'converted' to full time employees, essentially with the contracting/intern period as one big job interview.
Whether the employees get interviewed at Google or not depends on the size of the acquisition, at least for engineering positions. That's largely for logistical reasons. The vast majority of acquisitions of software engineers involve interviewing.
> The firm recently generated buzz in the talent industry
> when it said it had done away with the notorious brain
> teaser component of its interviews after statistics
> showed the ability to ace them had no correlation with
> success at the company.
Brain teasers have (to my knowledge) never been used in Google interviews. They're an urban legend which used to be said about Microsoft, and will no doubt be said about whatever big companies come next.
Journalists: "$COMPANY gives all its new employees wedgies!"
$COMPANY: "We do not think giving employees wedgies is useful."
Journalists: "$COMPANY generates buzz by doing away with wedgies!"
It's been a long time since my interview (2005), but I had a couple of brain teasers during my interview. Not as many as I had at Microsoft, but definitely some. I specifically remember being asked the Two Eggs question (http://www.datagenetics.com/blog/july22012/).
The two eggs question seems fair to ask an engineer. It's an optimization problem, and could just as easily be rephrased in real world terms (e.g. branching in a hard-realtime microcontroller). The solution can be arrived at through math and reasoning.
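(For the curious, the "math and reasoning" fits in a few lines. This is a minimal sketch, not anything from a Google interview rubric: the recurrence counts how many floors a given drop budget can distinguish, and for 100 floors and 2 eggs the well-known answer of 14 drops falls out.)

```python
def min_drops(floors, eggs=2):
    """Fewest worst-case drops needed to find the breaking floor.

    cover[e] = max floors distinguishable with the current drop budget
    and e eggs. Each extra drop gives: cover[e-1] floors if the egg
    breaks, cover[e] floors if it survives, plus the floor tested.
    """
    cover = [0] * (eggs + 1)
    drops = 0
    while cover[eggs] < floors:
        drops += 1
        for e in range(eggs, 0, -1):
            cover[e] = cover[e - 1] + cover[e] + 1
    return drops

print(min_drops(100))  # 14
```

With one egg it degenerates to linear search (100 drops), which is a nice sanity check to walk through at the whiteboard.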
Typically, trick questions are things like "how many golf balls fit in a bus", "you've been shrunk and dropped in a blender", or the infamous "why are manhole covers round". These supposedly test the candidate's creativity, but actually just test whether the candidate has read a book of riddles recently.
I disagree with your categorization of "how many golf balls fit in a bus" as a trick question. That's a normal estimation problem that's highly relevant to a software engineer's daily work. (Rephrase the question as "how many users can you serve with one CPU core". Capacity planning is an entire department's worth of work.)
Adding features to your software can drastically reduce throughput per CPU core. Even simple architecture refactoring for better scalability can drastically reduce throughput per CPU core (just as you have to pick availability versus consistency, you also have to pick scalability versus throughput).
And hence, because it depends on what the software does and how it evolves, the question cannot be answered unless you stress test a CPU core. The question itself, as phrased, is even stupider than how many golf balls fit in a bus.
The question is not a literal question, just some words meant to evoke in the reader's mind the relevance of estimation to software engineering. For a detailed discussion on this subject, see Chapter 7 of Programming Pearls: http://www.cs.bell-labs.com/cm/cs/pearls/bote.html
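In that spirit, the whole exercise is a few lines of arithmetic. Every input below is a made-up illustrative guess, not a real measurement; the point of the question is whether the candidate can identify which quantities matter:

```python
# Back-of-envelope: users served per CPU core. All inputs are guesses.
requests_per_user_per_day = 50   # assumption: how chatty one user is
cpu_ms_per_request = 5           # assumption: CPU time per request

core_ms_per_day = 24 * 60 * 60 * 1000          # one core's daily budget
requests_per_core_per_day = core_ms_per_day / cpu_ms_per_request
users_per_core = requests_per_core_per_day / requests_per_user_per_day

print(int(users_per_core))  # 345600 with these particular guesses
```

The specific number is worthless; knowing that it's driven by per-request CPU cost and per-user request volume (and which lever you'd pull to change it) is the whole answer.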
OK, yeah, you're right. But I think care should be taken to have topic diversity in the interview process, which is another thing that bothers me about Google's interviews.
I went through Google's interview process once. After two phone screens with algorithmic questions, I was invited to a full-day on-site interview involving meetings with 5 people, all of whom asked me to solve algorithmic problems. In the end I was told that I did OK in the first 3 meetings but not so well in the last 2 (actually I was able to answer all the problems given to me, but my performance dropped at some point from getting tired, and the last 2 interviewers probably had other questions they wanted to ask but couldn't for lack of time).
Personally, I recognize the importance of algorithms, data structures, and reasoning about asymptotic complexity. It's stuff some of us have been learning since high school, and it should be common knowledge for all of us.
But what about other things of importance, like actually being able to build and deliver functional software? What about the quality of the code you write, experience building scalable systems, the ability to work in a team, or having personal projects? I was never asked (though maybe it was just my luck to be interviewed only by people who cared about algorithms).
Of course, it can be said that these practices worked well until now for Google. After all, there are plenty of really capable people working there. But maybe that happened in spite of the interview process, not because of it (e.g. one reason could simply be that resumes with internal recommendations have priority and people at Google are good at networking and recommending other good people).
Please tell me you guys do capacity planning with more than just vague numbers you pull out of your ass, a whiteboard, and some guy staring at you making subjective judgments of the number you come up with.
Yes, but we don't expect someone to use actual numbers in response to a hypothetical question in an interview. The question is intended to see if you could figure out which numbers you'd really need to figure out the right estimate.
Often, the specific result of an estimate is less interesting than the user's understanding of what drives that estimate, because it determines if they understand the levers they can pull to change the outcome in the real world.
You don't want to ask the actual question about the specific application, because it gives a strong advantage to a candidate who has worked on that very specific situation in the past (and they may be able to sneak through just by regurgitating from memory.) It's better to have a scenario they're unlikely to have seen previously. As an interviewer, I'm not interested in your ability to remember solutions you received from others, I want to know how you deal with a problem you haven't seen before.
Sometimes folks call these "puzzlers" but I think they're unfairly lumped in with true "puzzlers" - crap like "why is a manhole round" where there's a right answer that depends on either experience or some specific insight. A question where you're asking a candidate to walk through their thought process on a hypothetical (but reasonable) situation isn't "puzzling" unless you just can't handle the question.
Well that was a little condescending. If you think it's vitally important to screen people for their ability to solve Fermi problems, it's no skin off my back. I was just trying to have a conversation.
Man, I don't necessarily think they should be used in technical interviews, but I really don't get the aversion to these kinds of questions - stuff like this is so much fun to think about and debate!
I personally think manhole covers should be human-cross-section-from-above-shaped.
I think what is always left out of these snippets in articles is what job roles they are referring to when they talk about the interviews. Sounds like these teasers have never been used in software engineer interviews but have been used for product management or others.
This struck me as exceedingly slow (I know that my friend got an offer in 10 days start to finish, but he's probably an anomaly since he had multiple offers already).
Is this in line with other software companies this size? I also wonder if this varies with function.
That fifth person is a “shadow interviewer” who is simply training to conduct interviews for future job seekers, and that person’s analysis isn’t included in the decision-making process.
This is hilarious! It's just like ETS (Educational Testing Service), which runs the US-based standardized tests.
45 days seems exceedingly slow to me as well, based on my experience with another large tech company. Though I can't say for sure the end-to-end time of the whole process, we can generally get from an onsite interview loop to a hiring decision within 24 hours, and HR takes another day or two to generate the first verbal offer. So that's what, three days? As I recall it didn't take anywhere near 42 days to go from resume submission to onsite interview, either.
It's not really slow if the "time to hire" is the time from an inbound resume, to an outbound written offer (or even longer, an accepted offer). (A verbal offer is great, but nobody generally accepts a verbal offer, they have to sign some kind of agreement to formally accept the job.)
The screening process takes a week or two (14 days), and then you have to schedule an on-site interview loop, which can often take a week or two of lead time (since many interviewers have to travel to SF, and you have to schedule the interviewers as well), and that only leaves a week or two at best to issue a formal written offer. I've seen it take a lot longer than that in total at many good companies.
> (A verbal offer is great, but nobody generally accepts a verbal offer, they have to sign some kind of agreement to formally accept the job.)
Well, you'd want to call someone on the phone and get a verbal OK before sending them the offer letter, right? From verbal to written shouldn't be more than a 24 hour turnaround.
So that's the day of the interview, the day after the interview to make the decision, and three more days (so the rest of the week) to generate and send out a written offer that the candidate may or may not have already verbally okayed. That's 5 business days, or 5-7 calendar days.
> The screening process takes a week or two (14 days)
14 days to look at resumes and do a phone screen or two? Maybe at maximum, if you have to bounce the resume between teams to find the best fit.
> and then you have to schedule an on-site interview loop, which can often take a week or two lead time (since many interviewers have to travel to SF, and you have to schedule the interviewers as well)
Why would the interviewers have to travel? Don't you interview people on the same campus where most of the interviewers actually work? Why would it take two weeks to book a couple of hours on k different people's calendars (hopefully most of them selected from a pool of size n >> k)? Sorry, I think that timeline is way too padded.
No, it can be a long time between a verbal and a written, because the verbal offer doesn't contain all the compensation details and requires final sign off from whoever makes those decisions. In almost all the corporate jobs I've gotten, I got a verbal offer relatively quickly, but the formal, signed offer letter came a long time (anywhere from a week to a month) later because it required the signature of VP or someone similar.
Re travel: I meant that the candidate has to travel, and the interviews have to be scheduled with the pool of interviewers. The minimum time for this is generally a week, but usually it takes longer. Interviews are a significant / immovable commitment and interviewers are required to give notice if they can't make it several days ahead, which means they have to be scheduled well before that.
> Is this in line with other software companies this size?
Large companies, yes. Not sure about software.
And the thing that struck me as Monty Python ludicrous is that no one from the actual team is interviewing you! Almost every large company and most smaller companies have you interview on site with your actual team members. Is Googlish really that strong a predictor of team fit?
Shadow interviewers don't do a separate interview. They typically observe an interview done by an experienced interviewer and then write up feedback.
It is primarily meant to be a learning experience for the shadow interviewer. They will get to see the feedback written by the primary interviewer, and the primary interviewer will usually critique the feedback from the shadow interviewer.
The feedback from the shadow interviewer isn't actually discarded: it gets sent to the hiring committee with an indication that it is from a shadow interviewer. It can be given less weight by the hiring committee members, but I've been on hiring committees where the shadow interviewer picked up on something the primary interviewer missed, and it affected the hiring decision.
I very much enjoyed the opportunity to interview at Google and to see some of the inside of the campus, but articles like this seem disingenuous to me.
My first interview on-site was delayed due to room reservation issues, so it started late, and this continued as a theme throughout the day.
That's anecdata, but I have to wonder whether whatever schema and/or processing pipeline allegedly in place to remove human bias can account for issues like this, i.e. environmental and/or human failures outside the scope of the process. These systems don't tend to critique and monitor the environment, just the data that is entered into them afterwards.
This, plus the admission from a few Googlers that personal recommendations really do make a big difference, makes me a little jaded when reading articles like this. I'd much prefer to see admissions of honest weaknesses alongside all the positives, but I guess we're still some way from treating anything except severe security breaches that way in the software/marketing industries (this article is in the latter).
Edit: I feel like I should add a bit of context about why I feel the article is misleading, given that my post was inspired by personal experience.
The misleading aspect is that the article tends to portray the process as striving towards an unbiased science, whereas my perception is that bias is still part of the decision-making process (arguably for good -- personal recommendations can be very positive indicators, as long as they're not from an old-boys-style network), and I feel that there is insufficient measurement and understanding of interview factors to make it a science (i.e. exhaustion/travel factors, cultural differences, personal factors, etc - which I don't think affected me, but are still a real part of interviewing).
NB: When I say old-boys network, I mean any kind of non-meritocracy that simply aims to get people 'in the door' without full vetting; I believe this is possible regardless of gender, but that's just the term I know to describe it.
Out of Google, Amazon, Microsoft and Facebook, Google was the only company that straight up declined to interview me because I did not meet GPA requirements (3.0 cumulative - I had a 3.4 at the time in CS but I hadn't done well in freshman chem or calc 3)
I like that they decided to stop asking brain teasers due to the lack of correlation between performance on them and performance once hired. Do they really think that cumulative GPA has a strong correlation with new-hire performance?
I certainly don't.
(I expect to get some push-back from you guys and I'm interested in the discussion to follow :))
When it comes to hiring graduates, I believe the GPA requirements are put in place as a first cull.
These companies get so many graduate applicants each year that they place an arbitrary GPA requirement just to slightly reduce the number they take to the next stage of the interview process.
Google does the same kind of software interviews as anywhere else. They aren't special. You code on a whiteboard and it's supposed to be compilable in C or Java. The process beyond that is just as subjective as anywhere else and is largely based on gut. They depend largely on employee references/friends.
The data they collect is really only to make the process more efficient, not more effective. They reward employees that process the most phone screens in a month. The strangest thing, in my opinion, is that the interviewers usually don't make a decision at all, they just give a rating. Then, a group of people who've never even met the candidate decide whether to hire them based on forms that were filled out.
| Google does the same kind of software interviews as anywhere else. [...]
| You code on a whiteboard and it's supposed to be compilable in C or Java.
It may seem like a strange idea, but not all places interview like this. The company I currently work for doesn't do this and the one I will start working for soon doesn't interview like this.
The reasoning is that my job is not to stand in front of a whiteboard and write syntactically correct code without the aid of an editor or compiler, so maybe there is a better way to screen candidates that directly tests the skills they will use on the job.
There's a difference between syntactically-correct code (meaning every single paren/brace is balanced, no dropped semicolons, should compile exactly as written, etc.) and what I call "valid" code. Expecting a candidate to write the former without an editor and compiler is silly.
A good interviewer (and good hiring committee) wants to see that you can write "correct-ish" code: it looks like it would compile modulo a typo or niggling detail, but the algorithm, data structures, and control flow are clear and valid. A good interviewer will also tell you that up front, e.g. "I'm interested in your code, not your syntax". If they don't, ASK!
The problem is: not everyone is a good interviewer, and it's surprisingly hard to teach someone to BE a good interviewer.
I think I agree with all of that. Especially the "interviewing is hard" part. But what's wrong with having someone sit in front of a computer to write some code?
At the company I'm leaving, we ask candidates to do a small project as a first pass. If the code isn't awful, we call them in to pair program with us (we're a pairing shop) on the code they wrote to improve it.
The company I'm joining just has you come in and pair (they're also a pairing shop) on projects their team is actually working on for most of a day.
I find both to be pretty good approaches. Better than having someone write code in an unfamiliar setting (the white board) to solve problems which are often, but not always contrived (write quick sort, etc.), at least.
We've had pretty good success with identifying good candidates since we switched over to this model and the place where I'm starting is a consultancy which is pretty well-regarded in the startup community, including HN, so they've probably had reasonably good success identifying talent.
How big are these companies, how many people do they hire per week, and how many do they interview? Any number of perfectly reasonable interviewing strategies in smaller companies fall apart at big-company scale (1000+ resume submissions per DAY)
Fair point. The companies are small and medium-sized. I'm not convinced that those models can't scale, though. One large-ish company that I know of who still does something similar is ThoughtWorks. They have on the order of thousands of employees and their interview process consists of, among other things, a take-home code submission and on-site pair programming.
If you're getting 1000 resumes per day, I think you would be able to weed out a significant number of them just by giving them a coding assignment to work on. Churning out a resume is easy, but sitting down for a few hours and writing well-designed code takes effort.
They probably can scale up to a point. Thoughtworks has 2100 employees. Google has 45,000.
How would you grade/score ~1000 code submissions per day, though? You could conceivably do Coursera/TopCoder-style automated grading, but that can only get you so far and can't distinguish good code vs. bad code vs. "copied from Glassdoor" code. It might be useful as an initial filter, but you'd have to constantly implement new questions with associated grading scripts, as the problems would inevitably leak.
This isn't really true, you can use whatever language you want for the programming sections. If you say you only know PHP, they'll find you four interviewers that know PHP.
Interview feedback is not really "filling out a form", either. I take about 6 hours to write feedback for a 45 minute interview. It's very detailed and really covers a chronology of what happened during the interview. Interviewers also submit a hire/no hire score for the candidate, so it's not only the hiring committee that makes the hiring decision. If an interviewer is willing to say "this is the best person I've ever interviewed in my life", that's a big deal.
I've been in the recruitment pipeline at Google 3 times over the last ten years. Each of the latter two times, they had no record of the previous recruitment. In the one case where I got to on-site interviews, the first set of interviewers had somebody else's resume with my name on top. It took me most of the morning (because it's an all-day process) to convince them that I was not actually the piece of paper they were waving at me.
I'm curious about what is meant by the elimination of "brain teasers." Could it be that the brain teaser has just shifted to code questions?
Part of the challenge of a technical interview is to get at someone's coding ability without resorting to what are essentially brain teasers disguised as computer science questions - and I'd expect a lot of disagreement around where you draw that line.
Here are a few I've been asked:
- Print out the fibonacci sequence recursively.
- Print all permutations of a string (using recursion).
- Swap two integers without creating a third integer.
- Out of several million database entries, select a few at random to display to the user. Don't repeat any until they've all been displayed.
- Implement mergesort, code a singleton, print a binary tree in order, add a branch, find a cycle in a linked list...
Which of these would you consider brain teasers, if any? I'd say the "swap two integers" is the closest... but if you're including a lot of these questions, are you still in brain teaser land?
I'd love to see some google data on what types of technical questions correlate with job performance, rather than simple "brain teasers".
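For what it's worth, most of the items in that list have compact textbook answers rather than riddle-style tricks. The "find a cycle in a linked list" one, for instance, is usually solved with Floyd's two-pointer technique; a sketch, not anything Google-specific:

```python
class Node:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def has_cycle(head):
    """Floyd's tortoise-and-hare: a fast pointer advancing two steps
    per iteration eventually laps a slow pointer iff the list loops."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

It runs in O(n) time with O(1) extra space, and discussing those bounds (versus the obvious hash-set-of-visited-nodes approach) is exactly the follow-up conversation such questions are fishing for.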
It's interesting that they mention speed when they're notorious for taking months to get back to their candidates. 45 days doesn't seem like a number to be all that proud of! I suppose the sheer volume of applications makes things harder, but surely companies like Microsoft and Facebook get lots of applicants too.
In my last interview, they took over 3 months to get back to me. In the end I emailed them; it seems my HR manager had completely forgotten about me. Nice.
When I interviewed with Google (April 2013), they got back to me a week later, which felt perfectly reasonable. On the other hand, the process of getting to the interview stage was ridiculously, unreasonably prolonged.
It just sounds complicated for the sake of being complicated. I can't see how being impartial is really helpful here - surely they want people to form relationships and figure out whether someone can actually work on a specific project or problem?
Impartiality is very important when hiring skilled workers. If your hiring process is not impartial, then it's introducing undesirable bias into the hiring decision.
The classic example is sexism in orchestras. Orchestra interviews used to consist of the candidate sitting on a stage and performing their piece for a panel of judges. The gender ratio of performers was terribly skewed, even worse than the software development field is today. Orchestra managers excused this by saying that women were simply not as good -- after all, the judges' scores don't lie!
But a funny thing happens if you put the performers behind a screen. Suddenly, all the factors of their gender and race and grooming go away, and new performers start being a lot more diverse. There actually was a bias, a serious one, and it caused orchestras to lose who knows how many excellent candidates.
The current state of the art for programming interviews at big companies is to have the candidate solve actual programming problems, including whiteboard coding. It's not practical to put the candidate behind a screen or modulate their voice, so splitting up the tasks of "interview candidate" and "review interview feedback" is the best that Google has figured out.
You must realize that your orchestra example doesn't apply here. What they did is clean-cut and simple removal of the gender bias. Provable.
Compare this with what google is doing. It would be the equivalent of having orchestra candidates write an "original" tune on paper in 20 minutes and then talk about select topics in music theory and then submit that to a figure skating judge panel.
It's also worth pointing out that Google wants to hire people that will last for more than one project. It's great if you're hired for a specific project, but what about when that project's done?
Since it's data driven, your opinions are nice but apparently not correct; that is, apparently, people are more successful in the company when they are hired by impartial interviewers than by biased ones... The End.
Haven't RTFA, but I hope this is a joke. For a company that boasts of making all of its decisions with solid scientific data, its hiring process is an emotional ass-grabbing parody of Twelfth Night.