Why 37signals Doesn't Hire Programmers Based on Brainteasers (37signals.com)
605 points by teaspoon on Jan 5, 2012 | 423 comments



I helped start the practice at ITA Software (acquired by Google last year) of using puzzles as part of the hiring process. We ultimately even put our puzzles in the Boston subway as recruiting ads. My assessment, after more than a decade of doing it:

- Good puzzles are actually a talent attractor; many smart people found out about our company via our puzzles.

- Good puzzles are ones that scale well; i.e., where the basic problem is pretty easy and can be solved by a decent programmer in a few hours, but with harder variants that can take much longer.

- "Take-home" puzzles (as opposed to in-person whiteboard tests) weed people out who don't really like to program and/or can't finish things; this is a useful filter. If someone complains that "they shouldn't have to spend three hours writing code to get an interview," that itself is a pretty good counter-indicator. (Yes, we are all busy. But you're talking about starting a relationship with the company that may last 10+ years. You can do three hours of prep work for your interview. And you have to code a fun puzzle in the language of your choice, not some subroutine in a 30-year-old COBOL banking system.)

- It seems that in-person whiteboard or locked-in-a-room tests are pretty poor indicators of success. Many good programmers put in this situation significantly underperform their true abilities.

- It's not a great idea to evaluate someone purely on the basis of puzzle-solving ability.

- Many one-liner puzzles are bad indicators, because you either need to "know the trick" or have memorized the answer. Many, but not all, Microsoft and Google interview questions I hear about -- I've never interviewed at either place myself -- sound like they fall into this category. (E.g., from a recent article I read about Google: "You're reduced to the size of a nickel and thrown into a blender. How do you get out?" I don't care if you're clever enough to answer that; I just want you to be able to write enormous amounts of high-quality code for me.)

In my experience, the best way to find out if someone is a good programmer is to talk to him/her at length about something he/she has built. Can he/she talk in detail about how it worked? What challenges were involved? What tradeoffs were made? And, above all, was there passion behind the work and these choices?

A good interview for me was one where the candidate came in having written or contributed to a large system -- for fun or work -- and was excited to tell me about it. It didn't really matter whether it was relevant to the work we did (airfare search). Anecdotally, it seemed like people who really loved to code generally worked out pretty well. People with great-looking resumes who didn't love to code -- but maybe loved other things, like arguing over which language, operating system, architecture, or business strategy was better -- usually didn't produce much.

If the first thing you want to do when you wake up in the morning is code something, you're probably going to do well as a coder on a hard-core software team. Otherwise, not. Everything I personally did on programmer-hiring strategy was a proxy for figuring out whether someone was like that or not.


Maybe I'm weird, but I find puzzles to be a turnoff. I love writing code, and I enjoy working through a math book.

But compared to coding and math, puzzles feel sterile. I don't get to build anything, and I rarely get the kind of insights I would get from figuring out a proof. (Raymond Smullyan's puzzles are a marvelous exception to this.) Even cryptanalysis is more fun, because there's a real opponent.

I know lots of amazing programmers who love puzzles, and that's cool. But if you only hired puzzle lovers, you'd miss out on a lot of good people.


You're not weird. Puzzles can definitely be fun, but, in my experience, it doesn't mean you're a lousy programmer if you can't solve one/some.

Everyone likes to use Google as the example of the elite programming company. I've read the books about them, used the products, followed their progress over the years. So, sure, a part of their big success is hiring the 'smartest' college grads.

However, I've also worked with Ph.D.'s and even interviewed them for positions lower than ones I've held (I have not yet finished college - some day...). And I can tell you that book smarts != execution smarts.

I feel the same way about puzzles. I have never used puzzles or asked 'small' one-off programming questions. Instead, I've always been more interested in whether they grok the concepts and whether their personality is the proper fit. In my mind, anyone can open up a book and read the specifics of O(log n). But how does a person persist at solving a problem, and how do they (can they?) change direction if their solution sucks?

I feel like I value creativity above all else, and if they can creatively solve a puzzle, cool. But if they can creatively solve a task on the system I'm building and are a great fit for the team, even better.


Same here. My impression is that the puzzle lovers turn out to be the algorithm geeks. You need a bunch of those, but I can't see how a room full of complexity analysis experts necessarily creates well-structured software.

I'm an API design geek. I've never seen a cool puzzle about API design, or something remotely related.


I'm pretty sure one of Quora's questions is/was designing an API for something in Python (probably time, given how bad the current API is).

Also, replying more to the other post above: you can get insights from puzzles. A lot depends on the puzzle (which is why ITA is getting praise here). For example, the ITA puzzle I remember taught me a lot about handling data in large numbers of dimensions. In particular, it made me think about the "curse of dimensionality", which was something I was aware of before but hadn't considered in detail; it also led me to some tools for dealing with it, locality-sensitive hashing being the most interesting at the time.
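For anyone who hasn't run into locality-sensitive hashing, here is a minimal sketch of the random-hyperplane variant for cosine similarity (the vectors, dimensions, and function names are mine for illustration, not anything from the ITA puzzle):

    import random

    def make_planes(dim, n_bits, seed=0):
        """Random hyperplanes, one per signature bit."""
        rng = random.Random(seed)
        return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

    def lsh_signature(vec, planes):
        """One bit per plane: 1 if the vector lies on the plane's positive side."""
        return tuple(int(sum(v * p for v, p in zip(vec, plane)) >= 0) for plane in planes)

    planes = make_planes(dim=3, n_bits=8)
    print(lsh_signature([1.0, 2.0, 0.5], planes))
    print(lsh_signature([1.1, 1.9, 0.4], planes))    # similar direction: mostly the same bits
    print(lsh_signature([-1.0, 0.1, -2.0], planes))  # different direction: mostly different bits

Vectors pointing in similar directions tend to share signature bits, so bucketing by signature groups likely near-neighbours together without comparing every pair -- which is the whole point in high dimensions.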


Most puzzles are not very interesting. :) But some are surprisingly rich, and those are the good ones to put in front of candidates in my opinion. E.g., I worked on one puzzle before I left ITA that involved laying out train track pieces to create a complete circuit within a rectangular region. (I don't believe this one has ever been used, but I could be wrong about that.) It was pretty interesting to solve this heuristically, but it turned out to involve something much deeper: the theory of so-called "lattice animals" -- an active area of research I knew nothing about.

Lattice animals include polyominoes, which are probably familiar to most HN readers, but subsume other graph-related objects as well. I didn't know much about these, and ended up reading a bunch of cool papers on them as part of working on the puzzle. So at least for me, good puzzles have catalyzed some learning.


I don't think that's too weird. I am never very interested in solving puzzles. Myst was a major turnoff, for instance. The Project Euler problems are usually "meh".

My mathematical preference is continuous math; my preference for using mathematics is to apply it to real problems.

I'm not saying that all puzzles are pointless, but I think most interview puzzles are kind of an empty exercise that only a certain type of person likes, and that only people who know the trick will get.


For a good programmer, a 3 hour take home test is like a $300+ application fee. Even if the puzzles are reasonable, filters work both ways. For a recent graduate or a mediocre programmer, it's probably worth the time for a shot at the job, but talented people will have plenty of options at good companies that don't make them jump through extra hoops. Perhaps it's fine if you pay a lot higher than average or offer significant perks, but I could see a strategy like this unwittingly driving away the top tier of candidates if you aren't careful.


That is a reasonable theory, and I was always personally very concerned about that, but in practice it didn't seem to be the case. It seemed that the opposite occurred: people who didn't even want jobs did the puzzles just for fun, and asked us if they got the right answer.

Of course, we couldn't easily do A/B experiments, so we'll never really know the true attract:alienate ratio.

Trying to look at it objectively, I would probably be somewhat put off by being required to do a puzzle to "prove myself", but would then find the specific puzzles offered to be pretty interesting, and worth playing around with. Perhaps it's not tantamount to a $300 tax on one's life if it seemed fun and valuable in its own right?


I have less disdain for pre-interview development filters and have seen some companies do them pretty well. I think your assumption about people who would find it fun could hold some merit. But I do believe the parent poster is right: the really good developers are rarely going to bother. They are usually too busy in the first place. They may see your company, think "oh cool, I like these guys," then hit the pre-interview filter and decide it's too much investment for just a chance. And that's the core of it: on both sides it's a numbers game. We can tell ourselves "they didn't really want it" or "we're only getting the ones that really want it," but you may just be getting the ones who aren't working the odds out in their heads, or the ones who are desperate and will invest the time for a chance. There are a lot of assumptions baked into these kinds of tests.


I think in the end, most people just want to have their abilities evaluated fairly. And even if I'm not a total puzzle freak, I do love to code, and given time I can figure most of these puzzles out. So if I were looking for a job at your (former) company, this would not constitute a barrier for me. On the other hand, if I were verbally asked a question and told to work it out on a whiteboard, this would not be an accurate indicator of my abilities or my potential. Add to that the fact that my handwriting is poor and I would be nervous, and I'm quite sure a verbal description of Pascal's triangle would elude me, whereas if I were shown a document with the written output, I think I could come up with a naive solution in 5-10 mins. I'm good at sitting down by myself with a written problem and working it out. As a result, I've always done well on standardized tests. In fact, I suspect this is the reason for some of the indignation at the whiteboard testing. People who have always gotten good grades, coded well, and done well on standardized tests are suddenly told that they're not smart enough. I can understand how it would rankle.

Note: edited typos.
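For what it's worth, the kind of naive solution being described is only a few lines once you've seen the written output: each row comes from pairwise sums of the previous row. A minimal sketch (my own, purely to illustrate the point):

    def pascals_triangle(rows):
        """Naive construction: each new row is pairwise sums of the previous row, with 1s on the ends."""
        triangle = [[1]]
        for _ in range(rows - 1):
            prev = triangle[-1]
            triangle.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
        return triangle

    for row in pascals_triangle(6):
        print(row)   # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], ...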


It isn't like that because paying money to someone in exchange for merely being evaluated is an easy way to fall for a scam: the evaluator has a significant incentive to offer this deal to people who will fail.

It is much more like having to travel 90 minutes each way to get to where they want you to interview. That could be very offputting if you do enough interviews that three hours each adds up to a big number, but at least you don't need to worry about them being tempted to rip you off.


I interviewed at ITA in 2007, and ITA did the puzzle process better than all but a few other companies I've interviewed at, for the reasons you mention. But don't discount the in-person whiteboard puzzle. I was told I had the second highest score on the puzzle solution I sent in, but I flopped the whiteboard puzzle part of the interview, and ended up not being hired. I think this was a good thing - I didn't have the algorithm skills and knowledge of the other people in the QPX group (QPX is ITA's main business, their search engine - they actually recommended in advance that I interview for their reservation system group based on my resume, but I decided to go for the more brainy one). If I didn't have the whiteboard interview, I would have gotten in and probably regretted it later (much later, I realized that I would have regretted working at ITA anyway, or anywhere in Boston, but that's totally unrelated).

During the same job search, I also noticed Facebook had puzzles. I solved one and got invited to interview but never went. I wouldn't have applied otherwise. I guess this is another lesson - if you put public puzzles out there, expect people to send solutions without necessarily being excited to work at your company.


I once was asked the SAME puzzle at two interviews (for different companies) two weeks apart.

I'd never seen it before the first interview. In that first interview, I broke it down quickly into its component parts, and figured out where the trick had to be -- and from there quickly solved it on the spot.

In the second interview I spent a couple minutes trying to remember the solution from the first interview, and completely flubbed it. That wasn't the only part of the second interview I flubbed, either; I also ended up arguing with one of the interviewers and basically telling him he was wrong.

Oddly enough, I got the job with the second company and worked there for almost six years. For the record, I love doing those kinds of puzzles. But I do video game development, so maybe puzzles are a bit more relevant to my industry (in that 80%+ of what we're doing is figuring out how to solve puzzles in the form of how to do something with less CPU or why a particular video card is behaving in a certain way).


I saw those puzzle ads frequently in Kendall/MIT and Harvard (what, 6-7 years ago?). It was very clever recruiting on your and ITA's part, and a much better way of using brain teasers than as part of an in-person whiteboard session.

I don't think you give yourself enough credit in the success of the puzzles. They were really well-conceived problems - just difficult enough without requiring too much domain knowledge. Very 'catchy', if that makes sense.


Thanks. We were very aware of the marketing aspect of the puzzles, and designed them accordingly. The ones that made it into the T were ones that had visual appeal as well. It wasn't just coders, either; marketing and graphic design played a big role as well.


Are the ITA puzzles like this archived online anywhere? They sound fascinating.


They're a lot of fun! You can find them here: http://www.itasoftware.com/careers/puzzle_archive.html


"You're reduced to the size of a nickel and thrown into a blender. How do you get out?"

If I were asked such a question in an interview, my response would be: "Dunno. Let's talk about some real-world problems you're facing right now instead."

I dislike trivial questions, especially during an interview. I'm a professional, not a circus animal.


Or maybe just walk out of the interview. If there is a possibility I will be shrink-rayed and thrown in a blender, I'm not interested.


3 hours of prep work X 5 applications (1 application a day for a week) = 15 hours of unpaid work per week, or 60 hours of unpaid work per month. Given that a typical job search lasts multiple months, this is not an insignificant workload to saddle potential employees with. However, if your company is special in some way and people feel it's worth the rigamarole to apply there, then the market will bear it.


I don't see how this is relevant. Where did the parent poster say that you should be doing this 20 times per month?


> I don't care if you're clever enough to answer that;

Bingo.

To me, this is the kicker statement for the industry. Solving fake problems like nickel-in-a-blender is for the clever. But I sure as hell don't want clever code solutions; I want rock-solid, reliable, fundamental solutions.

Anyone who feels these questions are an indicator of programmatic success is measuring the wrong things.


To be fair, the "nickel in a blender" type of problems have been actively banned from engineering interviews at Google for many years. That doesn't mean that there aren't a few holdouts that still use them, but there is an active effort to eliminate them.

I sit on an engineering hiring committee and I can't really think of the last time I saw one of the ridiculous puzzle questions show up in an engineering interview. The WSJ article was pretty ridiculous.


Wasn't implying these to be solely a Google issue, but rather something that a material number of firms in the tech industry like to follow in interviewing.

Sorry, but I'm not familiar with the WSJ article you've referenced. Link?



Yes. That's it exactly. Thanks!


But I suspect that ITA was exceptional. First, your puzzles were interesting -- I imagine it's quite hard to think of interesting puzzles (how did you do it?). Second, ITA had a very good reputation, so people were very motivated to work there. Third, you may have been solving harder problems (and looking for more problem-oriented programmers) than many other companies.

I have done your puzzles for fun, but I would still be annoyed at having to do half-baked puzzles if I were suddenly out of a job and applying to a bunch of places.

(PS: and thanks for them :o)


I agree that the puzzles definitely have to be well-designed. There were a few of us -- myself included, for a while at least -- who created most of the puzzles.

We also never put a puzzle out to the world until a bunch of us had solved it. There was a long gestation period, and there's a "puzzles-discuss" mailing list for working out the kinks (and, in many cases, rejecting certain puzzles entirely). So, yes, you have to make a real commitment to do it well.

On the other hand, developing the puzzles was a lot easier than most of the coding on our real system, so it's all relative. It was actually really fun to do this stuff, and a nice change of pace. One of ITA's best developers (Justin Boyan) would take it upon himself to create the "perl postcard" solution for each problem. Another (Jim Rees) would always make an incredibly fast C++ version; it usually ended up being not only the fastest of ours, but faster than all the submitted solutions as well. I learned things just by reading his puzzle solutions. :)

I found the process rewarding beyond the hiring benefits, and I'm sure I'm not the only one. (Well, I hope so, at least.)


I'm jealous :) It sounds like a lot of fun. When they came out, I used the ITA puzzles to shake the rust off my Lisp skills, and even considered sending some of the solutions in and applying, but that would have been a major career change.

In any case, kudos and thanks to you - those puzzles were a lot of fun, and I always wondered where they came from.


Personally, I failed the ITA puzzle. (Either I failed it or they looked at my resume after the fact and decided I wasn't smart or qualified enough... probably a combination.)

So, here is my opinion as a non-halfway-decent programmer.

I thought the puzzle was fairly fun, but I like programming for two reasons:

* Building useful knowledge. When I'm learning a new language, library, or algorithm I want to play with, programming is the natural way to learn.

* Building cool stuff that people like. Having an awesome product when I'm done with it gives the feeling that I've done a good job and produced something of value.

Programming puzzles don't really do either of these things for me. My motivation for spending three hours doing a puzzle is that I would like to talk to a person. I am basically assured that the puzzle will be used as a filter for the company to avoid talking to people. It will run against a test suite and if it fails it will not be looked at by anybody. This seems very impersonal. Is this the experience I should expect from the rest of the company if I'm hired?

I've got a GitHub account with some of my code in it. I've spent time on it and enjoyed writing all of that code. I published it to the internet, so I'm (at the very least) not ashamed of it. Why can't you look at that to determine if I can program? Why do I have to write code specifically for your particular company?

This isn't an attack on ITA; it seems to work for them. But they are a unique case of a company where it seemed worth it for me to spend the time to do the puzzle. Any other company, I would have just skipped it unless I genuinely had nothing better or more interesting to do (which I almost always do).


>If someone complains that "they shouldn't have to spend three hours writing code to get an interview," that itself is a pretty good counter-indicator. (Yes, we are all busy.

When I applied at ITA they didn't even acknowledge the program I had written to go along with the application. Not even a quick 'it ran' or 'had the right output but we hired somebody else'.

I'm glad I didn't get the job, because not having enough time to evaluate or even run a programming test you gave is also a pretty good counter-indicator of a good company to work for. And when I wrote back to express this, the guy sheepishly admitted they had so many exceptional applicants that they just went with an Ivy League graduate. Wow.

If you are going to require a programming test, be prepared to evaluate it.


If you are going to require a programming test, be prepared to evaluate it.

I agree. It is possible the reviewer didn't think your code would run, couldn't get it to compile, etc. But, nevertheless, you should have received a fuller response than that. If I were still COO at ITA, I would ask someone to look into your particular case to understand what happened.


Wait a minute—the ITA puzzles were supposed to be solvable in three hours? I wish I’d known that before I tried to solve one.


Is that too long or too short? I would like to understand what you mean by this.


Like jsnell, I spent weeks’ worth of nights beating my head against the problem I selected, and the solution I eventually came up with was not good enough to earn me an interview.

I gather that someone with a stronger theoretical CS background than myself would have spent an hour looking at the puzzle, said “Ah, this can be modelled as a graph and solved by applying the Flibblewhiz Heuristic!”, spent another hour coding up a first draft of the solution, and a third hour optimizing and cleaning up the code.

My ego has survived the blow of learning that I am not awesome enough to work for ITA, but I wish I had figured that out sooner.


I don't think I had a satisfying solution to any of the puzzles in 3 hours. In the most extreme case, I saw some people spend a couple of weeks solving the two-time pad problem. Even after knowing how to cancel out the pad.
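For anyone wondering what "cancel out the pad" means: when the same one-time pad is reused for two messages, XORing the two ciphertexts makes the pad drop out entirely, leaving the XOR of the two plaintexts. A tiny sketch with toy strings of my own (the hard part of the actual puzzle starts after this step):

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    pad = bytes([0x5f, 0x13, 0xa7, 0x42, 0x99, 0x0c, 0x31, 0x6e, 0x55, 0x21, 0x7b])
    p1 = b"attack dawn"
    p2 = b"defend dusk"
    c1, c2 = xor_bytes(p1, pad), xor_bytes(p2, pad)

    # The pad cancels: c1 XOR c2 == p1 XOR p2.
    assert xor_bytes(c1, c2) == xor_bytes(p1, p2)

From there, guessing likely words in one plaintext ("crib dragging") reveals bytes of the other, which is the slow, fiddly part people apparently spent weeks on.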


Two-time pad was considerably harder, I think, than average. What about the ASCII maze one? I always thought that one was pretty accessible.


Ah, fair enough, there must have been some simpler ones there as well. It just was not the kind of thing I would have hacked on for fun :-)

I was thinking more of all the cool search problems like Strawberry Fields, Traversing the T or Palindromic Pangram, where coming up with some solution is easy, a naive exhaustive search for the optimal solution was not feasible, and where verifying the validity of any heuristics / pruning was not necessarily trivial.


Are you responsible for the puzzle involving the historical sites search with a google instant style interface? This was a long time ago and it's no longer in the archives.

I worked on that one for a looong time, and what bugged me was that the screenshot of the expected interface had a result returned in like 0.05675ms or something, and I couldn't figure out how to get it that fast. It took me forever to convince myself that it was just a mock and probably should have read 0.05675s (seconds instead of milliseconds).


I didn't work on that one, no, but I wouldn't assume the time was fake either. :)

We often had internal solutions (and externally-submitted solutions as well) that were orders of magnitude faster than the average. There was a very wide range of execution times. E.g., we had solutions to the Queens & Knights puzzle (in the archive) that ran in tens of milliseconds, if I recall correctly. Typical solutions we received took orders of magnitude longer.

Usually when one solution was a million times faster than another, it was because the fast solution exploited a clever algorithm or data representation -- e.g., representing an 8x8 chess board mask as a single 64-bit register rather than as an array of bytes.
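A minimal sketch of that bitboard idea (square numbering and helper name are my own): the whole 8x8 board lives in one integer, so occupancy tests and updates are single bitwise operations instead of walks over a 64-byte array.

    def square_bit(rank, file):
        # Bit position 8*rank + file marks one square of an 8x8 board.
        return 1 << (8 * rank + file)

    board = 0
    board |= square_bit(0, 4)   # put a piece on e1
    board |= square_bit(7, 4)   # put a piece on e8

    print(bool(board & square_bit(0, 4)))   # True: e1 is occupied (a single AND)
    print(bin(board).count("1"))            # 2 pieces on the board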


That puzzle is here: http://www.itasoftware.com/careers/work-at-ita/hiring-puzzle...

(The sub-millisecond time was for real)


Ah thanks! However, I'm worried I may take another crack at it now!

I may have missed that the time was the time taken for the search algorithm to find results; I think I was under the impression that it was the time for the whole request/response to happen. Hmm.


I must personally say thank you for starting those puzzles. Over the years I have implemented many of them as fun weekend projects, but one puzzle in particular stuck with me: Sling Blade Runner. On and off over the last ~four years I have worked on the Sling Blade Runner puzzle, creating dozens of different approaches to the problem: genetic algorithms, visualizers, various traveling salesman "solutions", stuffing the graph into the CPU cache and letting it run for a month to document how far the dumb solution wouldn't get, custom algorithms doing wacky things such as spherical gravity water flow simulations, and many more. I have pushed that data set through a lot of things.

When I read books and papers, I'm constantly working on the SBR problem in the back of my head. Generic graph theory can be a boring topic, but for some reason, with this problem and data set to play with, it has stuck with me. The general problem of finding the longest path in a directed graph with cycles is NP-hard, so programmatic solutions are all about finding something good enough in limited time, or exploiting quirks of the data set to actually solve it. There is a lot of payoff in the industry for good-enough solutions to this problem (which I am not actually involved in; I hack on WebKit during the day). It is the puzzle that keeps on giving, and as I encounter new techniques on the job or in reading, I often end up trying them later in the evening to see how well/fast they apply to this problem.

I moved to Boston with an existing job (the same one I am at now), so I never really got to go through the whole interview process at ITA, but I always thought it would be fun to help design those puzzles. So again, thanks for the puzzles, especially Sling Blade Runner!
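For readers who haven't seen it: Sling Blade Runner asks for the longest chain of movie titles where consecutive titles overlap ("Sling Blade" -> "Blade Runner"), which boils down to finding a long path in a directed graph. A minimal sketch of the "good enough, not optimal" flavour of approach being described -- randomized greedy walks, keep the best -- with a toy graph and names of my own:

    import random

    def greedy_chain(graph, start, rng):
        """One walk: keep stepping to a random not-yet-visited successor until stuck."""
        path, seen = [start], {start}
        while True:
            choices = [n for n in graph.get(path[-1], ()) if n not in seen]
            if not choices:
                return path
            path.append(rng.choice(choices))
            seen.add(path[-1])

    def longest_chain(graph, tries=10000, seed=0):
        """Best chain over many randomized walks; a heuristic, with no optimality guarantee."""
        rng, best, nodes = random.Random(seed), [], list(graph)
        for _ in range(tries):
            path = greedy_chain(graph, rng.choice(nodes), rng)
            if len(path) > len(best):
                best = path
        return best

    toy = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
    print(longest_chain(toy))   # ['A', 'B', 'D', 'E'] (or the 'C' variant)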


E.g., from a recent article I read about Google: "You're reduced to the size of a nickel and thrown into a blender. How do you get out?"

Many of the articles about Google interview questions on sites like Business Insider, at least for software engineering positions, are total bullshit. http://www.technologywoman.com/2010/05/17/debunking-the-goog...


I got asked 3 puzzles for my current job. It was quite annoying. One, I knew off the cuff. The other 2 were bad questions. One was caught in the "if you don't make these assumptions you're wrong" trap. For the other, the "correct" answer was actually wrong. It really pissed me off.


Funnily enough, the only time I've been asked a genuine logic puzzle in an interview was at ITA (don't remember the details, but it was something about ants walking on a 1d line and bouncing off each other). It even had some kind of a "either you see it or you don't" aspect as well.

But the programming puzzles on the website were just awesome, even though I just solved them because they were fun, not for applying for a job. I've been surprised that nobody else has done the same thing, but your explanation of how much work went into the puzzles kind of explains that :-)


Just to clarify, ITA's puzzles weren't all of the mathematical / algorithmic-tricks variety.

I did one of them just for fun 3-4 years ago: "implement a scalable multi-user chat server without using a framework like Netty/ACE/EventMachine". Unfortunately, at the time I couldn't move to Boston, so I never submitted my solution.

Likewise, there were puzzles that would appeal to web developers (e.g., build a typeahead web application), machine learning/AI folks, etc.


It seems like you still hire based on puzzles as a first filter (judging by your careers page). Just wondering, what is the ratio of people who apply with only a resume vs. those who solve the puzzle (I understand you probably can't reveal it)? And, more importantly, do those who come in via the puzzles turn out to be more successful in the interview?


I'm sure someone knows that ratio, but not me. :)

But the puzzle process, as originally conceived, (in 2000!) was not necessarily supposed to be a hard filter. As an applicant, you had several ways to stand out, and doing a great job on a puzzle was one. Having someone who already worked for us say you were awesome was another. Being an open source superhero was another. Etc.

In actual implementation, though, it probably was/is used as a hard filter.


In actual implementation, though, it probably was/is used as a hard filter.

It always becomes a hard filter; there is a lot of ego in development, and tests are an area where egos shine through. It's my major beef with quiz exercises in interviews: they always become the hardest filter, no matter how much we try to contain them.


So people who handle these stressful situations well and solve these problems under pressure only represent one type of developer - which Google has in spades - the kind who does very well in a structured environment, got great grades in their undergrad program, loves to worry over interesting and difficult problems, etc. The kind of people who don't do well in these situations (like myself) are often valuable in other ways. Some people cave under pressure, and even more people have very low self-esteem when it comes to programming and intelligence in general. Yet these same people can be highly valuable and productive given enough time to make mistakes and solve problems in their own way. These types of people may be better at seeing the bigger picture and are less likely to over-engineer a solution or fall into the not-invented-here trap.

I think it's interesting that Google finds a lot of bright engineers using these high-pressure hiring tactics, but that they often turn to wholesale talent acquisitions when it comes to building products like Google Plus. My guess is the vast majority of the talent Google has gotten through acquisitions would have failed miserably going through Google's traditional hiring process.


My guess is the vast majority of the talent Google has gotten through acquisitions would have failed miserably going through Google's traditional hiring process.

That's a very good point that I've not seen made so well before.


The variant (and converse?) I prefer (though not mine):

After so many failed Google interviews, I found it easier to build a company and sell it to them, instead.


I've heard that Larry Page could not pass the Google interview today.


This is probably a lie. The Google interview is not particularly hard. It might be hard if you absolutely refuse to ever let your eyes look at anything that could possibly be considered an "algorithms book", but otherwise... it's stuff that you will know if you care about computer programming at all. The questions are about applying big concepts, not implementation details of particular algorithms. (I couldn't write A* from memory, but I do know what an "admissible heuristic" is. And that's good enough to be hired by Google.)


which Google has in spades

I would argue that they don't, and I believe that it is due to their hiring process. Now, I do believe that they have very talented individuals, but their organization reflects their interviewing bias, and so does their product portfolio. There is a reason that they have to acquire new products for their portfolio, and that reason is that they are not recruiting the kind of people who generate new products. As such, their hiring process should be suspect.


Indeed. Hiring programmers is about hiring people who bring business value to your organization, not about bringing in people who can solve Pascal's triangle faster than the other 90% of candidates. The entire first thread on this page is a shining example of what is broken with Google's hiring process.

37signals hires for business value because, as a small company, it has to. Google is a huge company and doesn't have the same incentives. Google's size can blind it to the fact that its hiring practices optimize for mathematical knowledge and intuition over business value. And, as a business, Google is wrong and 37signals is right.


> My guess is the vast majority of the talent Google has gotten through acquisitions would have failed miserably going through Google's traditional hiring process.

I don't believe that's the case. What I've seen with talent acquisitions (at Google and elsewhere) is that interviewing the team is actually prerequisite to making the acquisition. Of course, the interview doesn't have to be as rigorous: during the due-diligence process you can actually view the codebase and see the individuals' contributions.

What I've also seen, specifically in the case of Google, is that individuals from acquisitions would be asked to stay a year, at the end of which they'd need to interview for other positions at Google: if they couldn't pass the interview, they'd be let go with a very generous severance package.

That said, I am sure there are exceptions.


When I give a live programming problem, I cut interviewees a lot of slack for not doing well under pressure, as long as they seem suitably embarrassed about it. There's a big difference to me between "I know this step has got to be really simple, but I'm having a brain cramp", and "This step is really hard, no wonder I can't get it" (when it isn't).


I'm curious why my comment has been voted down. Was "suitably embarrassed" an offensive way to phrase it, or do people disagree with the distinction I made in the last sentence? I think it's pretty valuable to have a sense, even if you can't solve something, of whether the solution is likely to be simple or not.


I think the problem people have with this approach is that they don't want to be defined by their performance on one very specific problem. Most people, myself included, want to be seen as the sum of many accomplishments, not just one superficial one in an interview.


Oh, I totally agree there; I mentioned in a different comment that the whiteboard coding problem is just one component of many that we use.


This topic as a whole seems to be very contentious, and stuff is getting downvoted seemingly randomly. Even comments that are interesting and that nobody could surely be objecting to.


I think that in most cases Google will interview the employees of the company to be acquired. Either in advance to decide whether the company is worth buying at all, or after the fact to decide who can keep their jobs. The bar might be set slightly differently in these cases than for normal hiring, but certainly not so different that the vast majority of the acquihires would have been "miserable failures" otherwise.

(Also, I've never heard of G+ being in any way a product of a talent acquisition.)


My own experience is that if you're really interested in getting into companies with recruitment processes like this, you might want to be uninterested when talking to the recruiter, talk down your level of interest, question whether the company has anything worthwhile to offer you, be open about your reservations about their recruitment process, and detail your issues with any interviews you do with them.

Generally whenever I've done this to recruiters their interest level goes up and they tend to be far more willing to listen to your concerns about the interview and go to bat for you. And recruiters tend to have a lot more ability than you might think in getting parts of the technical interviews largely ignored by hiring managers - including at Google.

Incidentally, my impression from having talked to several Google recruiters is that a decent number of them are incredibly frustrated with the way the process works, and will work hard to get you past this hurdle if you want them to and provide them with enough ammunition (e.g., explain why the question was inappropriate for the position).


So Google runs two complementary techniques in parallel. Is that bad?


Hiring developers based on brain teasers is like hiring journalists based on how well they do at scrabble.


Couldn't agree more! I will quote you in the future.


I like this analogy, because if you are hiring journalists, you may also want to hire a crossword editor.


My first interview out of college was for a position as a Mac developer for a company that made a PC email product (for Novell's LAN message exchanger -- this was in 1990). I'd done several business-focused Mac applications while in college, including a really complex EDI app, and I thought they'd be interested in seeing them as part of the interview. I even brought selected portions of source code.

Nope! Instead they gave me an hourlong verbal interview, which went great, took me to lunch with the team, which went great, then sat me down with a lead developer, who had printed out some of the winning entries from the Obfuscated C Contest, and asked me to read them and tell him what each program would do when compiled. That did not go so great. I didn't get the job.

That experience remains burned into my brain, and as a manager I won't give puzzles to an interviewee. I'll ask to see code and expect them to speak intelligently about it & the choices they made, but I won't ask them to write code (or, compile code in their head) on the spot.

(Postscript to my story -- that company never shipped a Mac email client. I believe they interview-filtered themselves out of the entire Mac market.)


One of my biggest pet peeves with the current trends in industry hiring practices is the puzzles.

Mainly because I do bring in lots of source code. I write tonnes of code in my free time. I contribute to open source projects on occasion and have released some too. It's all out in the open. All my blog posts, everything. I'm an open book when it comes to my ability, what I've worked on, etc.

Yet in 95% of the interviews I've walked into, the people sitting across the table from me had to look at my resume while they were shaking my hand to find out my name.

Then the puzzles come in the hopes that I will be filtered out quickly so the person interviewing me can get on with their day.

I also find that in such an intense situation my brain is hard-wired to find the "right" answer so that I get what I want without getting burned. It's kind of hard to have a jovial discussion about code and logic when you've got a lot riding on getting the job. It's almost Pavlovian.


Amen. The only reason I ask people to bring code is to discuss it. It doesn't even really matter if it's their own code, or if it's good code. Being able to reason about code and discuss it with others is a much better indicator of a developer's ability and potential.

There's always the probation period to see if they can actually write code and solve real world puzzles. I've never met the mythical applicant that could bullshit their way through a technical interview.


After considerable reflection upon this point, I tend to fall on the side of those that say that a really _simple_ problem, like FizzBuzz, is quite valuable as a negative filter. It saves one valuable time in dealing with people that, as it turns out, can't code at all, literally, but are incredibly good bullshitters. There are those. But that's about as far as I think whiteboard code quiz shows should go.

It took me a long time to appreciate the utility of something like FizzBuzz in terms of how Pareto-optimal it is.

I think it finally clicked when I was getting interviewed for my US citizenship by USCIS in 2008. In the preliminary stages of the application process, the officials had gotten us all psyched up about how there is going to be a US civics test and an "English test". We were given a 100-question booklet of Civics 101 and a lot of anticipatory anxiety.

So, imagine my surprise when the interview came around and with it, the storied, fabled English proficiency test. You know what it was? Listening to one simple sentence read aloud and transcribing it accurately. It wasn't even a hard sentence, it was something on the order of "The man walked his dog across the yard and over the turnpike". Really? Really?! I was flabbergasted. Come on, I could pass this in like 4 languages. One sentence? That's all?

After thinking about it, I realised there was more utility to it than I had appreciated, though. Yeah, I've been speaking English for 18 years, but there were people in the room who literally did not. They could not pass this test. And for someone who really cannot speak, write or understand English much, this test would be genuinely challenging or impossible to pass. At the same time, 90% of people who can pass it probably are sufficiently proficient in English to function in this society. I'm not arguing that it's a particularly insightful, perspicacious or revelatory test, but the point is that it accomplishes more than you think, from a statistical perspective, even though anyone who is even marginally literate would dismiss it as ridiculously easy to the point of absurdity.

So it goes with "a priori" programming tests, I think.


Funny, every time I read "puzzle" or "brainteaser", I think, "I don't know why manhole covers are round," or "Who gives a shit that it took 7.612 seconds for my program to identify the 78,498 prime numbers less than 1,000,000." (Yes I know, get a life.)

But every time I read "FizzBuzz", I drop everything and go back to refactor my latest version, aspiring to get it down to one conditional.
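One possible shape of the "one conditional" refactor (a sketch of my own, not the commenter's actual code): build the word multiplicatively, then fall back to the number only if the word came out empty.

    for n in range(1, 101):
        word = "Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0)
        print(word if word else n)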


    >> (1..100).each { |n| puts `curl -s fizzy.heroku.com/#{n}` }


win!


Thank you for inspiring me!

I revisited my FizzBuzz just now and threw together this obscene Python one-liner.

(If I write this kind of code in production, I should be a guaranteed "no-hire"!)

    from itertools import chain, combinations
    fizzbuzz = lambda n,terms: ((lambda d: d.update((x%d,w) for d,w in sorted(reduce(lambda (d0,w0),(d1,w1):(d0*d1,w0+w1),sorted(x)) for x in chain(*(combinations(terms.iteritems(),s) for s in xrange(1,len(terms)+1))))) or d)({0:str(x)})[0] for x in xrange(1,n))

    print ', '.join(fizzbuzz(100,{3:'fizz',5:'buzz'}))


I know why manhole covers are round and if that's the reason I got the job I don't know if I'd be happy with that.

After all this FizzBuzzing I guess I should start writing one up just so I can say I've done it.


I don't enjoy brainteasers. If an interview question begins, "Four people and a goat need to cross a river...", I cringe.

But when I'm interviewing, I do ask people for a pointer to their open source projects. And if they don't have open source projects, I give them a workstation and ask them to write code. I've seen too many self-proclaimed programmers who claim, "I spent the last 2 years writing Python," who can't sum a list of 10 numbers without using Google. (Hiring can break my heart.)

One way or another, you've got to see code. Real code is best, but it's better to ask people about toy data structures than it is to hire 3 team members who can't write FizzBuzz.


> who can't sum a list of 10 numbers without using Google.

I did a hiring round once for my old team when I worked at Yahoo years ago. Out of the 10 or so CVs the recruiters passed me, about 5 people -- given a short list of screening questions by e-mail to answer at their own leisure -- produced answers that were obviously plagiarized from various online sources.

I got suspicious when one guy presented me with two pages of nicely formatted answer, complete with section headings that didn't make any sense, to a question that should have required a one-line answer. He'd copied and pasted the whole thing from an online copy of an Oracle manual, so I started doing searches of random sentences from their answers.

Another one managed to cut and paste an answer that was flat-out wrong from a forum post answering pretty much the same question, and hadn't bothered to read further in the comment thread, where several people corrected the answer he'd used...

It's when they can't even answer simple stuff WITH Google that it gets really depressing, and it happened regularly enough that using screening questions by e-mail was a real time saver in weeding out undesirables, despite the ease with which people could "cheat".


This sounds like what happens when you give students problem sets...


Absolutely. At my last C++ job, I did a lot of interviews and a lot of people claimed to know the STL. I left the low-level compiler questions to my colleagues for the most part, but I would ask a few key questions.

My favourite STL-related question to ask is simple and can be done on a whiteboard in minutes: if you didn't have std::set<class T>, but you had everything else in the STL, how might you implement a set<class T> class?

There's an ideal answer[1], and then there are people who start thinking about low-level implementations. The ones I don't want are the ones who don't have any answer at all. I'll continue to be interested in anyone who starts talking about the way to solve the problem. What matters to me is not any code that you write (although that's interesting) but your thought process. After the candidate has worked through this, if they haven't gotten the ideal answer I always give it to them: this isn't some national secret, and seeing them process the ideal answer and realize that yes, it really would work is a pleasure.

Sadly, there are cases where it doesn't work. I had one candidate who did very well on the phone screen, but didn't know the STL. I looked up what he claimed to know (Qt) and reframed the question in terms of Qt's classes—and he still couldn't come up with any answer whatsoever.

-a

[1] The ideal answer is precisely such because you're writing much less code and using stuff that people smarter than you have written. It's to use a std::map<class T, bool>.
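The thread's answer is, of course, C++ (a thin wrapper around std::map<T, bool>). To keep the snippets on this page in one language, here is the same wrapper idea sketched in Python on top of a plain dict -- class and method names are mine, purely to make the shape of the answer concrete:

    class DictBackedSet:
        """A set whose members are the keys of an internal mapping; the values are just True."""
        def __init__(self, items=()):
            self._d = {item: True for item in items}

        def add(self, item):
            self._d[item] = True

        def discard(self, item):
            self._d.pop(item, None)

        def __contains__(self, item):
            return item in self._d

        def __len__(self):
            return len(self._d)

        def __iter__(self):
            return iter(self._d)

    s = DictBackedSet([3, 1, 4])
    s.add(1)
    print(len(s), 4 in s, sorted(s))   # 3 True [1, 3, 4]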


I've never met a programmer who can't sum 10 numbers without using Google. I have however met plenty who go blank when asked a question they weren't prepared for in front of an intimidating person in a small quiet room during an interview.

Not everyone has performance anxiety, but I've met plenty of programmers who do. Unless you spend the first 15 minutes of your interview calming them down and warming them up with simple technical questions that put them in the mode of programming, you might as well be asking them to sing the Star-Spangled Banner naked in front of an audience.


On the other hand, I wouldn't mind coding a program that solved problems of the river-crossing kind, expressing it as a search through a state space with constraints.
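A minimal sketch of what that looks like for the classic wolf/goat/cabbage variant (the state representation and names here are mine): each state is "what's on the left bank plus where the farmer is," the constraints rule out unsafe states, and a breadth-first search finds the shortest sequence of crossings.

    from collections import deque

    ITEMS = {"wolf", "goat", "cabbage"}

    def unsafe(bank, farmer_there):
        # Something gets eaten only when the farmer isn't watching that bank.
        return not farmer_there and ({"wolf", "goat"} <= bank or {"goat", "cabbage"} <= bank)

    def solve():
        start = (frozenset(ITEMS), "left")   # everything (and the farmer) starts on the left
        queue, seen = deque([(start, [])]), {start}
        while queue:
            (left, farmer), path = queue.popleft()
            if not left and farmer == "right":
                return path
            here = left if farmer == "left" else ITEMS - left
            for cargo in [None] + sorted(here):
                new_left = set(left)
                if cargo:
                    (new_left.discard if farmer == "left" else new_left.add)(cargo)
                new_farmer = "right" if farmer == "left" else "left"
                state = (frozenset(new_left), new_farmer)
                if (unsafe(state[0], new_farmer == "left")
                        or unsafe(ITEMS - state[0], new_farmer == "right")
                        or state in seen):
                    continue
                seen.add(state)
                queue.append((state, path + [(cargo, new_farmer)]))

    print(solve())   # seven crossings, starting with ('goat', 'right')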


To DHH and team: thank you for using reasonableness and sanity in your hiring practices. Wish this was more the norm, but indeed it seems to be the exception.

Trying to evaluate programmatic skills is surely a challenge, but I've found a method that seems to work very well -- simple walkthroughs. I'll do at least two walkthroughs of code, asking a candidate to explain to me what's happening using two different artifacts:

* Some code they've written that solves some problem

* Some code we've written that solved some problem

This has proven to be quite effective in rooting out those who look great on paper and talk a great game, but fall apart when put into action. On the flip side, I've yet to find someone who could explain their code in sufficient detail and then proved incapable once on the job.


Here's the main thing that fucks this all up: there are a very limited set of brainteasers out there.

This means that if someone gets your brainteaser right in an interview, there's, say, a 90% chance that they've heard a question just like it before (or one with a similar "trick"). The false positive rate is too high. Many people know the answers to brainteasers far beyond their ability.

If you were to ask a completely new brainteaser, as difficult as the canonical ones, but requiring a different trick, you would get a very low success rate.

Side note: there's too much criticism on HN about "what does this question have to do with my day-to-day job". This is a bullshit point. The point of a brainteaser is to test problem solving ability, creativity, etc. It may not do a very good job of that, but just because you won't be "reversing sentences in place" in your job doesn't mean that the question is invalid. Think about what the question is trying to test.


The best interview is a pair programming session around some 1-2 hour problem. It takes longer, and it may come later in the interview process, but it's a high bandwidth way of assessing someone.

If they don't have a strong open source background (not everyone does), earlier in the process it helps to ask a couple very simple programming tasks, such as writing a function to reverse a string in place or the infamous fizzbuzz. It will weed out people with great resumes but poor skill.
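For reference, both of those warm-ups are tiny; here's a sketch of the in-place reversal (in Python, "in place" means a list of characters, since strings are immutable; the function name is mine):

    def reverse_in_place(chars):
        """Two-pointer swap from both ends toward the middle."""
        i, j = 0, len(chars) - 1
        while i < j:
            chars[i], chars[j] = chars[j], chars[i]
            i, j = i + 1, j - 1
        return chars

    print("".join(reverse_in_place(list("hello world"))))   # dlrow olleh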


    The best interview is a pair programming session
Maybe, but I've come to find that pair programming is a skill in its own right, one that needs honing unless you've been doing it for a while. Pairing with someone great might not seem great if they have no experience with pairing.

So once again, we're back to: there is no single best-way to fully assess a candidate.


Well, there is the intense XP understanding of pair programming, and then there is the just-sitting-next-to-someone-while-they-code version of pair programming. I can't believe anyone hasn't done something in that spectrum.

Even for juniors and new grads, at school people code next to each other all the time.


    sitting next to someone while they code 
Then that's not pair programming. Pairing is an act of give and take, Yin and Yang, Ping and Pong, and sometimes Tweedledee and Tweedledumb. Sitting next to someone, looking over their shoulder while they solve a problem you know the answer to, is not pair programming. There is no act of mutual discovery at all.


Why limit interview questions to problems you already know the answer to? You can have them help you with some problem in your actual codebase you're already working on.


> You can have them help you with some problem in your actual codebase you're already working on.

This is a terrible idea. If they come up with a brilliant solution can you actually use this in production? Who owns the code they write during an interview?


I can't believe anyone hasn't done something in that spectrum

The last time I did anything like that was over 10 years ago. How would that skew your hiring process?


In my case, I may not hire you. Why haven't you worked with another developer in 10 years?


I have worked with many, many developers. I just don't sit down next to them watching them program, or them watching me.

I have pair programmed (over 10 years ago) and I never saw any benefit from it. I work in an environment where every line of code checked into the repository is reviewed and documented in agonizing detail (welcome to the wonderful world of Medical Device software!) and I completely fail to see the benefit of pair programming.


I don't think I've ever worked in an environment where that's been something that happens all the time. Maybe occasionally being with someone while they try to sort out a bug.

But actually creating new code with someone sitting there -- that's not something I've seen often.


Unit testing is another simple topic that provides insights into how a person thinks about code.

Exercise: give them a class with empty functions, and some unit tests. Ask them to write the class and add some more tests. Also, one of the tests you provide must be subtly buggy.
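One possible shape for that exercise (the class, tests, and the particular bug are mine, just to make the idea concrete): hand over a stubbed class plus a small test file in which one assertion quietly does exact equality on floats.

    import unittest

    class Cart:
        """Skeleton handed to the candidate; the bodies are deliberately left empty."""
        def add(self, price):
            raise NotImplementedError

        def total(self):
            raise NotImplementedError

    class CartTest(unittest.TestCase):
        def test_total_of_two_items(self):
            cart = Cart()
            cart.add(2)
            cart.add(3)
            self.assertEqual(cart.total(), 5)

        def test_total_with_cents(self):
            # The subtly buggy one: with a straightforward implementation,
            # 0.10 + 0.20 is 0.30000000000000004, so exact equality fails.
            # A careful candidate should spot it and reach for assertAlmostEqual.
            cart = Cart()
            cart.add(0.10)
            cart.add(0.20)
            self.assertEqual(cart.total(), 0.30)

    if __name__ == "__main__":
        unittest.main()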


FWIW, we make people who are interviewing write unit tests against our production code. We have enough old, gnarly code that we can always find something that doesn't have coverage. It is a great way to see how they code, how they interact with unfamiliar code, and how they get on socially with the person doing the interview.


and a great way to improve test coverage for free!


I think this would work well.

I had an interview once that had me complete 3 small tasks on my own: a debugging task, a database task, and a small web site. All three had to be completed in under 2 hours. I wasn't given much direction, other than the specs of the 3 tasks.

While I think this was a moderate evaluation of my skills, it ended up being more a test of whether I could complete the tasks on time than of the quality of my work. I didn't make it to the next step of the interview: I was too concerned with writing quality code over actually completing the tasks (with mediocre code), and so didn't finish them, thinking they would rather get a feel for my quality than for whether I could complete a task. I did it remotely, so they never really got a feel for my character or personality.

Had it been in a paired session, things likely would have gone differently. However, I did learn more about where their priorities were, and ultimately feel I probably would not have been a good fit.


Don't know about the rest of HN but I found it strangely reassuring that even someone as talented as DHH can end up in front of a white board during an interview and feel stupid. I've been there...


There are various types of talents.

He's obviously very talented at design (not necessarily only UI design but general program structure and framework design). But if you needed someone to write packet routing algorithms or design a high-availability database engine from scratch, that may not be his cup of tea.

OTOH, I bet there are developers who love brain teasers and puzzles who may very well excel at that type of work but couldn't implement a huge software system with many moving parts without having it collapse under its weight.


"The only reliable gauge I’ve found for future programmer success is looking at real code they’ve written"

This, a thousand times over. Go through their GitHub (or whatever) profile, look at their commits, see how they write tests, etc. It doesn't take a huge amount of time; if you are any good at what you do, you'll quickly be able to filter out a failure (like, he doesn't have code that you can browse).


Every time I read this it frightens me. I don't have an active GitHub account; am I therefore a failure? I'm currently working a 16-month internship before graduating with a Computer Engineering degree. My code from this work will not be available anywhere, of course.

That's nobody's fault, though. I realize that having a GitHub account with projects you work on is a great help, and I accept that not having one yet is a real disadvantage. What do you recommend to get started on GitHub? Is helping out open source projects still viable? Or does it now come down to either writing self-run projects, or forking others' projects and helping them with those? Is there any clear starting point for GitHubbing? (And how do you squeeze it in after a 10-hour work day!)

Just questions from someone worried about my empty GitHub profile :)


Don't you write any code outside of work? Even if the main stuff I'm working on wasn't GPL'd, I'd still have my little side projects. They're nothing fancy, but putting them out in public doesn't hurt.


Exactly... the source code for your BYOB that you used to learn Python, say? Or even a fork and pull request to a project to fix even a small bug or add a minor feature.

On the one hand I'd agree for students/juniors it isn't a deal breaker. But students/juniors that do have it are showing a great deal of enthusiasm and passion for programming.


I would recommend starting to look into contributing to open source projects. This is already a big enough trend to notice, and it will become even more mainstream in the future, IMHO. But, of course, it is not absolute truth. If you want to land a .NET job at some big consultancy firm, they probably will not care about open source work anyway.


Don't worry, the "github or git out" attitude is only common among people you wouldn't want to work with anyway. They're the ones who will change languages or frameworks according to what's fashionable, rat-hole the team on fun but irrelevant projects, and not talk to you unless you dress like a hobo and swear like a sailor. They're over-represented here on HN, but real engineers know that most good code isn't on github and never will be.


My experience with gung-ho githubbers is the same. It's great to try out new technologies, but gitguys take it to an extreme. They'll shoehorn any new technology into their stack without a second thought about product stability.

Gitguy: "I used EC2, Blazboo and Chingbang to create an HA job queue that will never fail! It uses counting Bloom filters and I wrote it in Brainfuck."

Me: "What do you use it for?"

Gitguy: "Sending password reset emails."

These Rube Goldberg wannabes are generally more trouble than they're worth. You end up with a system that's a technological pastiche. The drawback is apparent when you try to hire new teammates. It turns out you can't find someone who knows the 12 esoteric packages your business is running on.


The state of programming is so awful that I'm literally just looking to know: do you write fat controller actions... or whatever the equivalent is for the specific technology you used?

That, to me, is 100x more important than "how many zeros are in 100 factorial"


I'm glad I'm not the only one who thinks this. I really like the energetic effervescence of some open source communities, but the punning names, over-extended metaphors and religious devotion to novelty uber alles are really starting to take their toll.


Funny, I thought github was a code repository, not a personality type...

real engineers know that most good code isn't on github and never will be

Irrelevant. Fact is, having code out there makes it easier for prospective employers to get a feel for you, there's no way around that. No one said it had to be the best code or all the code.

Now, if you are working your ass off on proprietary code and write absolutely no code outside of your work, then you won't have it. That's understandable, but doesn't change the fact that it makes it harder for someone to figure out what you can do.


Thanks for the strawman. I was explicitly referring to a mentality, not to github itself. I use github. I don't share that mentality. It's not github's fault that some users treat them as an oracle, but it does happen. I have on multiple occasions heard people say that they won't hire people unless they can check out their work on github - github specifically, and no qualification or allowance for exceptions. Every time, the speaker was exactly the kind of hipster idiot that I just described. I suppose that could just be a coincidence, or it could be that people who follow one stupid trend often follow others as well. In any case, that one stupid trend is sufficient to make working with them unappealing, and since there are plenty of sensible employers out there it's unnecessary as well.


Just to let you know, I agree with you in substance. If it makes a difference to someone whether you have your code on github or sourceforge, then they're obviously pushing a dogmatic view. But like you say, that's not github's fault, so conflating them and a subset of their users in an argument doesn't seem productive.


Not all, in fact many, programmers won't have a public github profile. If they do, great, but it can't really be a requirement of your hiring process, unless you want to limit yourself to candidates with public profiles.

Actually that wouldn't be the worst idea in the world.


> The only reliable gauge I’ve found for future programmer success is looking at real code they’ve written, talking through bigger picture issues, and, if all that is swell, trying them out for size.

I agree with the latter. This is how I used to hire programmers: one month on a freelance contract first, with a single project. The importance of the code itself is also overrated, IMHO. It's just one tiny part of someone you add to your team; what about creativity, taking initiative, never giving up, being a team player, etc.? Each of those is just as important as plain coding skill.


This seems like it would work in environments that aren't super competitive for top talent or if you are offering a dream job, but in the face of multiple offers this is going to be far less attractive to the candidate.

I've got a family. I need stability... and health insurance. Unless it's my only offer, a substantially better offer, or a dream job, I'm inclined to go elsewhere.


It's reliable. However, it's not competitive. You're assuming that I have nothing better to do with my time than to work for you on a freelance contract for a month, with no guarantee of a full time position, probably no benefits, etc.

If I'm job hunting, I've got other offers on the table. Ones that do guarantee a salary for more than a month. Ones that include benefits. Ones that expire long before your trial hire would be over.

If you try using this method, I don't know of a top-notch programmer you would be able to attract, simply because the hiring process is too painful.


Exactly. This isn't something that would work if somebody already has a job, either, unless it's something they can do part time (but even that depends a lot on their work hours and other commitments).

You can't really expect someone to quit their existing job just for a chance to work with you.


I imagine David is savvy enough to present it as a mutually beneficial arrangement.

As a prospective employee wouldn't you like to kick the tires before making the leap into a new job that you might find yourself wanting out of six weeks in? There's so many things you just can't know about a company until you're on the inside.

Edit: Remember that 37s has a lot to offer: Remote work with great people, top-tier name recognition in the tech world, and a long history of building and releasing open source tools. That's a really compelling package compared to some of the bigger names in tech.


If I'm uncertain enough to want a trial period, I'd also want to keep my options open. If the position wasn't a good fit, then by the time I find out, I've been forced to turn down my alternatives.

No matter how you look at it, you're asking me to take on significantly more risk with this process.

For developers that are in demand, it's a tough sell.


I'm sure you would like to, but it's not always practical. Even organizations like the military, who have fairly rigorous selection procedures, do it in a way that can be fitted into a long weekend.


I think the best interview I had was at Mochi Media. What each interviewer did was pick some programming headache that gave them trouble in the past couple of weeks and present the problem to me to see how I would solve it. It was probably the hardest interview I ever had, but I think it's a great way to assess whether a candidate is able to solve the problems they actually see.


I would hope "you just broke the user experience for 50 million people, there are 30 internal teams screaming your name" is higher pressure and stakes than an onsite interview loop. You can make or break your career in that situation too.

I had to deal with a crisis this week that boiled down to a poorly optimized SQL query or two. Of course, the manner in which they failed meant our DB started disk swapping and hurting our availability.

Contract to hire is something we do but it is rare. An initial try out period doesn't do us much good since the learning curve is long.

I don't trust take home assignments unless their solutions can't be googled. And it's a lot of work to create and evaluate problems of that nature. You want them just long enough, not a huge time sink for you or the candidate. You want them to demand a range of solutions so there are many ways to fail and succeed.

I agree we should not be doing riddles/puzzles. For me the most straightforward way to gauge a candidate's ability in an hour is to ask them to program on the spot and see how they act. Yes, it's a game and the candidate can learn to beat the system and oversell themselves. But the vast majority of people I don't want to hire do not go to these lengths.


I don't get the puzzle aversion. Sure, you can stare at someone's code as much as you want to, but I don't think you can infer a lot about how their thought processes worked when they were writing it (proper use of version control helps here, but not much). You only see the outcome. You may say: I don't care how they think, I want the job done. Well, that would make sense if they were contracting on a self-contained project. But usually you hire someone to join a team, where how the process works is more important than the actual outcome. Especially whether or not the process is flexible and fits your workflow. So puzzles are good for that, especially since it's not about getting the right answer but about approaching the problem. The reason they're abstract, which I agree sometimes crosses a line, is that leaving out the technical details of a problem makes sure that both candidates familiar with the technical details and those that are not have equal chances of doing well.

The problem with this post is that it puts "puzzles", "API quizzes" and "math riddles" into one category. I can see how one can be offended by an API quiz such as "what does X do in Y?", especially if they have a track record in Y or could just look it up on the Web if they didn't know. But math riddles? Puzzles? I'd actually be grateful that I don't have to deal with technical catches and can purely focus on a solution.

One thing to consider is that recruiting, especially at a larger scale, is about minimizing false positives rather than maximizing the collective value. Sure, someone good at puzzles might turn out to be an average engineer, but a bad one? I don't think so. Whereas I can see how someone with a huge pile of code on GitHub could get stuck in a critical situation where whether they write unit tests or how they design their APIs is meaningless.


I suppose the type of development 37signals does is more concerned with product design and less with fast algorithms, etc.

I haven't really used their products, but in terms of technical sophistication they probably aren't much more advanced than CRUD-type web apps (I may be wrong here).

If you wanted to hire a games engine developer or someone to design algorithms for google then brainteaser problems might be more relevant.


Not sure why I'm downvoted? Am I wrong?


Interview and hiring techniques are a perennial favourite on HN. I just have a couple of comments (disclaimer: I work for Google).

I personally detest abstract puzzles like "why are manhole covers round?" or (worse) "How do you move Mount Fuji?" AFAIK these types of questions are not and have never been used in engineering hiring at Google (sales, etc I know nothing about). WSJ and other sources have made this (AFAIK inaccurate) claim. It's possible individual interviewers do this.

My own philosophy is that a simple whiteboard coding problem is an incredibly useful litmus test. I don't mean anything incredibly complicated either. If you can't write a function that gives me a row of Pascal's triangle, that's a problem. Note: I don't expect you to know what Pascal's Triangle is, merely implement it when explained.

Some of you may think such a test is a waste of time but I assure you it is not. It's astonishing how many people can't do this (and even more astonishing how many that do make it through resume and even phone screens).

The key here is "simple". One problem is that interviewers fall into the trap of thinking these problems are too simple and make the problems increasingly harder. Worse, they may get in a situation of asking someone a problem that is either something they know (because they've covered it before) or they don't (where it's not something you can simply nut out). This is a useless indicator in my experience.

Just because you can create a function to return a row of Pascal's triangle doesn't mean you're a great programmer but if you can't, it almost certainly means you aren't. It's a negative signal filter, nothing more, but an incredibly quick and useful one.

API pop quizzes I'm not a big fan of either, generally speaking, but on the other hand if you've used a language sufficiently long, basic questions about commonly used libraries aren't unreasonable either.

Interviewing is a numbers game. Your goal is to come up with a process that balances time spent by the interviewer with finding a suitable candidate. Note that I didn't say accurately assessing each and every candidate. It's sufficient for the employer to simply find one qualified candidate even if you falsely determine someone who is qualified isn't.

A certain error rate is to be expected. It reaches a point where the time investment to reduce it outweighs the benefit of higher accuracy.

One final note: I detest (both as an interviewer and an interviewee) any kind of "take home" test (other comments have mentioned this).

As an interviewer, it's extra work to assess, people cheat, good candidates won't bother and so on.

As an interviewee, I'm not going to spend hours on you without speaking to someone first to find out how likely it is that you're a good match for me and the likelihood that you believe I'm a good match for you.

I would argue that a take home test is significantly more effort than a simple coding problem that doesn't have a significant increase in accuracy.

The one exception to this is automated testing where the problems themselves are relatively interesting. Some of the Facebook Puzzles fell into this category (and some don't). People may do those anyway for kicks. But again, there's nothing stopping someone going to Github and copying and pasting someone else's solution so what value is the filter, really?

EDIT: clarified my Pascal's Triangle comment. This isn't a trivia problem. I don't expect people to know what it is, merely implement it when explained.


You know what? I just tried implementing the Pascal's Triangle function. I implemented the general algorithm, recursively, but I had a bunch of logic errors that took me about 25 minutes to fix (one was obvious and embarrassing) - in the privacy of my own office and with a debugger.

I have over 10 years of programming experience and I probably would have miserably failed your "litmus test", especially if it were in an interview situation in front of a whiteboard. Maybe this just means I'm a bad programmer, but I tend to think that problems like "Pascal's Triangle" don't represent what we work on day-to-day.


Basic programming like Pascal's triangle represents the easiest stuff we do on a day-to-day basis. We write code. Pascal's triangle is code. Would you rather be tested on your ability to comprehend a multi-kloc codebase and make correctness-preserving modifications to it? Would you rather be tested on your ability to integrate or upgrade third-party libraries in a massive existing codebase? Coding is the easiest thing we do on a day-to-day basis; if it's not representative of what we work on day-to-day, it's because it's easier than what we do, not because it's harder.

I was astonished at the 25 minutes you claimed this took you, so I did it myself. It took me about two minutes to write this, which worked the first time I tried it:

    #!/usr/bin/env python

    def pascal(i):
        if i == 0:
            return [1]
        else:
            L = pascal(i-1)
            ret = [1]
            for i in xrange(len(L)-1):
                ret.append(L[i] + L[i+1])
            ret.append(1)
            return ret
    
    if __name__ == '__main__':
        import sys
        print pascal(int(sys.argv[1]))
What problems did you encounter that required you to use a debugger? I'm genuinely curious, because it seems like such a simple problem.


After 15 years of programming one doesn't really remember what the heck Pascal's triangle looks like. You know why? Because knowing about Pascal's triangle doesn't bring in a paycheck.

If you program for a living, ask yourself in the last 5 years how many times did you have to write Pascal's triangle? How many times did you have to solve a brain teaser at work? I'll answer for me -- 0 times.

So I am an advocate of posing a relevant problem and taking the candidate through that problem on the whiteboard. Not "let's imagine you have 100 horses, now one dies, blah blah..." but "Here is what we are working on. We have 1000 concurrent users, there is an open TCP connection to each, help us solve this particular latency problem." Yeah, they might not know about Pascal's triangle, but if they can understand and show an ability to solve your particular problem then they are a candidate for you.


After 15 years of programming, one would hope that seeing the definition of it would be enough to let you implement it in short order. Expecting to have it memorized? Of course not. Expecting you to be able to code it up after a good explanation of what it is? I wouldn't expect you to have any serious issues.

I honestly had forgotten the definition of it (the last time I saw it was in high school algebra), but I was able to get a working solution in about 10 minutes, including looking at the definition on Wikipedia. I'd say I'm a rather average coder (I'm certainly surrounded by people far better than I am, in any case).

If you had never seen or heard of Pascal's triangle before, I'd say maybe 20 minutes to code it would be reasonable.


Well, if the Pascal's triangle question is not a question of what Pascal's triangle is, and the problem is properly explained, then it is easy to come up with a solution.

However, this doesn't highlight good engineers. You know why? Because in the real world, figuring out what the problem is is often a lot harder than coming up with a solution.

Here's a sample:

If I say to you that "I want these and these people classified / sorted, such that I want to find the people that have the best probability of being a good hire" ... I haven't actually told you the problem. I don't really know the problem myself. All I know is maybe that the hiring process is broken for some reason and I'd like to fix it, but I don't know the reasons (I can't put my finger on it).

And that's not a properly defined problem. We don't even know how to measure properly if a hire was good or not. And in the real world, that's how most problems are described to engineers - i.e. I have this pain, solve it somehow.


> However, this doesn't highlight good engineers.

It's not intended to. It's highlighting bad engineers, not good ones. It's a negative filter, not a positive one.

> We don't even know how to measure properly if a hire was good or not.

Your company doesn't do performance reviews for employees?


> Your company doesn't do performance reviews for employees?

Every performance review I ever attended was a bad joke, for the same reasons companies have problems filtering between candidates.

Seriously, if there's a problem with the hiring process (it was just an example btw, I'm not saying there is one at your company), then why do you think performance reviews work out well?


Right. I would hope that implementing something like the Pascal triangle is not the only question you would ask. It indicates if someone can't turn specs into code, but not much more.


> After 15 years of programming one doesn't really remember what the heck Pascal's triangle looks like. You know why? Because knowing about Pascal's triangle doesn't bring in a paycheck.

No, it certainly doesn't. I've never had to write Pascal's triangle, for sure, and I doubt I ever will need to. But on a daily basis I'm solving myriad little problems just like Pascal's triangle. Complicated? Of course not, especially with the libraries my programming environment makes available. But it's the bread and butter of what a software engineer does a daily basis.

If your objection is to make any sense at all, you must be contending that there exist--in the real world--people who can effectively implement solutions for complex problems but who can't code up Pascal's triangle on a whiteboard in a few short minutes. Where are these people? Pascal's triangle isn't a positive filter. It's not like we're hiring anyone who can implement it. It's a negative filter: if you can't do it, why should I believe that you can solve the much more complex problems software engineers regularly face?


Well... unless you did a good job explaining the algorithm for generating Pascal's triangle in simple terms I'd probably fail given the pressure. Otherwise, my initial response would be a visit to Wikipedia where my eyes would promptly glaze over at the text talking about "binomial coefficients" and I'd decide your job advertisement was misleading and it probably isn't a good job anyway.

Assuming you terminated the interview first, I'd go home and come to resent you for asking such a deceptive question. A simple coding test does not invite someone to conceptualize a solution to an unknown problem in terms of rows when the interviewer expects a recursive solution. I'd find solace in noting that the recursive solution is far less efficient than simply building the table manually (pop quiz: calculate exactly how much less efficient in 30 seconds!), and assume that you didn't realize this or you would have asked a more clever brainteaser.

Either way, I'd leave the interview irritated you didn't want to talk about the real problems you needed to solve and upset you didn't look at any of the code I made available before you scheduled our interview. I'd eventually feel grateful things didn't work out, but I'd steer friends away from working with your company, out of the conviction the people interviewing and managing the programming staff don't understand how to judge programmer value, and that it isn't a good place for people who get things done.


>Well... unless you did a good job explaining the algorithm for generating Pascal's triangle in simple terms I'd probably fail given the pressure.

How's this for simple terms? http://en.wikipedia.org/wiki/File:PascalTriangleAnimated2.gi... It's on Wikipedia's page for Pascal's triangle. If your response to that page is to have your eyes 'glaze over' to the point where you can't find the animation (which has a simply-worded caption explaining the triangle), why should someone want to hire you?


I'll tell you why anyone would hire me if you can answer my simple question about the relative efficiencies of the top-down versus recursive solutions. You've already had an hour and claim to be familiar with the problem space, so I think it's a fair measure of your suitability in this space.

I'm going to go for coffee, but there's a whiteboard right there. Shouldn't take more than a minute or two.


Otherwise, my initial response would be a visit to Wikipedia where my eyes would promptly glaze over at the text talking about "binomial coefficients" and I'd decide your job advertisement was misleading and it probably isn't a good job anyway.

You would be right in deciding that. Over the years I've seen that these puzzle-and-math kids tend to have the lowest productivity when it comes to delivering business software.

And besides, if you spend, say, 30 minutes a day reading these interview puzzles online, you can game the system and get your way through, without ever being tested on some of the basic skills required to win in the real world.

You may know all the algorithms from the book by heart. But what's the point if you don't know how to use a couple of Unix text processing utilities? Puzzle kids generally answer that by saying something like "sed can be learnt by reading a manual."

If you can learn and work with sed by reading a manual on the job, then you can learn and work with algorithms by reading manuals on the job too!


> How many times did you have to solve a brain teaser at work?

There is a fine line between brain teasers and the difficult problems that are our bread and butter. To answer your question: I would categorize many particularly subtle problems I've solved at work as brain teasers, so my number is high.

Relevant problems are a great way to judge someone's problem solving skills, but any problem that gives a glimpse into the way a person thinks is sufficient. I am probably not willing to give a candidate the amount of context to describe, let alone time to solve, the exact problem that I happen to be in the middle of on the day of an interview, but rather one we have already solved and determined to be interview-worthy. This means I certainly don't care about the solution, but rather the path to the solution. If you believe, as I do, that people tend to walk similar paths through every problem they solve, and that the path is important and the solution worthless, then the relevance of the problem is irrelevant.


You didn't test how well he can code; what you tested is how easily he can represent mathematical theorems in a programming language.

That is definitely not what 99.999% of programmers do every day. Professors in colleges, maybe. But not programmers.


Bingo. So much of what people are doing these days is throwing together APIs and frameworks, and reading documentation on the same. If the candidate has experience with the technologies you're using, and can show that they've completed some real projects with them, they may actually be of more use than the guy who can perform mental acrobatics with brainteasers.


I got a job once by diving into a multi-kloc codebase and fixing bugs (it was open source and they had a bugtracker). They didn't ask me to do this - I just wanted to impress them. That was a very satisfying way to get a job; I'd be happy to do it again.


I would rather be tested by making changes to the multi-kloc codebase. It's more similar to what I actually spend my time doing at work. And it's a good chance to see the code I'd be working on.

As for coding Pascal's triangle? It took me <30 seconds to Google it and find code I could copy/paste.


I've had interviews where I've been asked to take a moderately obfuscated code sample, find out and describe to the interviewer what it does, and then find all the bugs in it.

To be honest, seeing about 200 lines that looked like this:

    typedef struct Abc {
        Abc *n;
        Abc *p;
        void *d;
    } Abc;

    Abc *foo(Abc *p, void *x)
    {
        Abc *n;

        n = malloc(sizeof(Abc*));
        n->p = NULL;
        n->n = p;
        n->d = x;
        p->p = n;
        return p;
    }
Note, there are at least 3 (edit: at least 4) bugs in the above code. None should be syntax errors.

Sorting it out was a bit of a challenge. I think the specific question that I was looking at deobfuscated into reading words and definitions in from the command line and maintaining them in a sorted list, then looking up the words and printing their definitions.

And I agree, it was a great interview question. I even had the ability to sit in front of the computer and test my fixes to make sure it worked.


Let me try (I do not often write production C code)

1. You need to malloc 3 words, and you're only allocating one (you want sizeof(Abc))

2. Abc in the struct will not resolve (change to struct _Abc and the struct name to _Abc)

3. You should cast the result of malloc / check for failure

4. This is bad memory management practice, I think.


1) Right.

2) No, 'struct Foo' is in a different namespace from the typedef'ed name. 'typedef struct Foo Foo' is perfectly valid.

3) Yes, absolutely.

4) Not sure what you mean.

Here's a hint: You'll likely miss one of them unless you realize what the algorithm is trying to do. It's an algorithmic error. The other one is findable from looking at the code without context, but the correct fix isn't entirely obvious without realizing the way the code works.

Pretty good, though :)


Abc is a node in a doubly linked list.

I think function foo is supposed to insert a new node (n) into the list before node p, but it's not done properly (while n points to p and p points to n, n does not point back to the item before p).


I was going for foo as 'dlist_prepend()', not dlist_insert(), but figuring out that it's putting a node into a list is good enough for me. (Edit: to me, 'prepend' implies that there are no nodes before the head of the list you're prepending to)

That should be enough for you to figure the last 2 errors, in any case.


Prepend is probably a better way of expressing "insert before" :P

n->p = p->p

I don't see the last one - unless foo should be returning n?


Yes, it should be returning `n`. And the final one I was thinking of was a segfault on an empty list

     prepend(NULL, &something);
would die if you don't do;

     if (p != NULL)
         p->p = n;

I think this would be easier in a larger problem where you had context that gave you usage examples. I probably wouldn't ask someone to debug a function floating in a vacuum like I did here. I'd use a trivial but fully working program.

I don't think I'd expect everyone to get every bug without a little bit of prodding in the right direction. Being able to step through the code and show what it's doing to the data structures would be a first step that would make me happy. Recognizing that the data structure looks like a linked list, and finding and fixing a couple of the bugs, would make me very happy.


Thanks for providing the example, this thread was very entertaining and made me become more interested in learning more about how C works.


Glad you found it interesting.


Function should return n rather than p.


Casting the result of malloc isn't a good idea in C because it can hide the lack of a prototype if you forget to #include <stdlib.h>. It's also a redundant mention of the type name in question, making for more work / noisy diffs under code change.

Unfortunately C++ requires the cast.


I agree that the real test is how well you do on the real codebase. But (as an interviewer) I'm not going to let you see that codebase or spend time describing its architecture to you until I'm convinced you are worthwhile. Hence, walk me through the steps to generate something programmatically; Pascal's triangle or anything else. I don't give a shit that you found code for it on Google, I'm not looking for a solution - I'm looking to understand how well you can solve problems.


I didn't prefix or append 1 to each row, like you did. Instead I added the edges of the previous row as special conditions and this caused a lot of trouble.

Realizing and taking advantage of this property of Pascal's Triangle significantly simplifies the problem. I guess it should be obvious, but I didn't consider using it. I don't think it's because I'm stupid (though I might be, I'm not sure) - I think it's because these kinds of problems are far removed from what I work on day-to-day.


>these kinds of problems are far removed from what I work on day-to-day.

Exactly! The OP says he "clarified" the problem, but I can clarify tons of math problems for you where the solution is staring you right in the face, yet it's impossible to solve if you haven't seen it previously.

example:

We all know 2,3,5,7,11,... are the primes

So the next guy is 13.

We all know 4,9,16,25,36 are the squares

So the next guy is 49

We all know 1,1,2,3,5 are the fibonacci numbers

So the next guy is 8

..

..

..

..

..

So, what's the next guy in 1,5,7, 15, 20 ?

( Remember the solution is right in front of you, but 90% of people on average won't figure it out. These are called convolution series. The solution is 28, if you haven't figured it out yet. )

(edit: provided solution and fixed a simple numerical error: dekz, thank you for spotting that )


That's a really bad question, and I'm not just saying that because I don't know what the answer is. The first three sequences are very ordinary mathematical series, suggesting that the fourth would be the same. But the sequence 1, 5, 7, 15, 20 doesn't appear in OEIS[1], let alone 1, 5, 7, 15, 20, 28. I feel comfortable in announcing that if a sequence doesn't appear in OEIS, it isn't mathematically significant. Likewise, "convolution series" doesn't appear to have a well-defined mathematical interpretation.

What sequence is this and what is the justification for 28 being the next number?

[1] http://oeis.org/


Apparently the solution is

(1) correct an error in the question which renders it unsolvable (a_3 should be 9, not 7)

(2) guess that the invented term "convolution series" doesn't have anything to do with the mathematical concept of convolution[1]

(3) a_n = nth square number minus nth prime minus nth Fibonacci number.

Staring you right in the face!

[1] https://en.wikipedia.org/wiki/Convolution#Definition


Dude. Relax. I said convolution series and not convolution integral ( the one you've linked to ). The "invented term" as you call it appears in Melzak's textbook ( companion to concrete math ). Finally, I actually "invented" a sequence that appears in OEIS ( https://oeis.org/A160138 ) , so I'm well aware that there are infinite sequences you can manufacture by simply changing the op in a convolution series. I just throw these questions into the mix to have some fun and games, otherwise doing interviews becomes very tedious and boring. Nobody is being judged on whether they can figure out that the number of moves for a legitimate tower of hanoi ie. 1,3,7,15, comes from the 2^n-1 sequence ( there's an entire chapter of D.E.Knuth's "concrete math" devoted to just that very topic, and it was my text in undergrad, so I like to have some fun with it. )


Funny, that's a question my friend asked me the other day because he couldn't figure it out in an interview. Took me half an hour to solve this thing. Would definitely have failed the interview question myself, imagine doing it when someone was watching your every move!?


Forgive the non-10%er but do you mind explaining the solution?

Closest I can come is square minus the other two which gives:

1 5 9 15 20 28...


> Closest I can come is square minus the other two which gives:

Yep, you got it! That is the solution. There is a method to the madness. Given series S squares, F fibonaccis, P primes, with an op of minus, the convolution series

C = S op F op P = S op P op F

gives you 1 5 9 15 20 28... ( ps the third term is 9 and not 7 as stated in the original problem. In my defense, it was 7pm and I was drowsy when I posted that, so I must have looked at my watch, seen the 7 on the dial and then wrote 7 instead of 9. Apologies )
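
For anyone who wants to sanity-check that series, here's a quick throwaway snippet. The indexing is my own assumption about what was intended (squares starting at 2^2, primes at 2, Fibonacci at 1, 1, ...):

    # Quick check of the "convolution series" described above.
    # Indexing is an assumption: squares from 2^2, primes from 2, Fibonacci from 1, 1, ...
    squares = [n * n for n in range(2, 8)]      # 4, 9, 16, 25, 36, 49
    primes = [2, 3, 5, 7, 11, 13]
    fibs = [1, 1, 2, 3, 5, 8]
    print([s - p - f for s, p, f in zip(squares, primes, fibs)])
    # => [1, 5, 9, 15, 20, 28]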


The convolution series question is bad 'cos you don't give enough terms to even see it. The next term is 28 only if the first four numbers are 1,5,7,15 and you assume that what precedes 1 is 0. If you give two or three terms more (28, 48, 75), I'm sure more people will have success figuring it out. The number of terms given is like saying "1 1 2 ?" and saying the missing one is 3 because this is the Fibonacci series.

In any case, even if the candidate didn't figure out the convolution series, but vocally tried calculating differences, or tried to look at the numbers in a base other than 10, or even complained that she could cook up an arbitrary rule for the remainder of the series, that kind of thought process is certainly interesting and playful interview material... if the post calls for a keen eye for patterns. The key is that the answer is not what you're interested in, ever. It's always the thought process.


> We all know 4,9,25,36,49 are the squares

2^2, 3^2, 5^2, 6^2, 7^2

4^2? Did I miss something there?


> I can clarify tons of math problems for you where the solution is staring you right in the face yet its impossible to solve if you haven't seen it previously.

Are you seriously contending that Pascal's triangle is such a problem?

We all know that "Aha!" problems exist, that there are algorithms that are simple in hindsight despite being anything but obvious before we were told the answer. I don't think anyone here is proposing that such questions are good for interviews. But do you really believe that producing rows of Pascal's triangle, when the triangle has already been explained, is actually one of those "Aha!" questions?

If you aren't contending that, I'm not sure what point you're trying to make above.


For my solution, after Jemfinch posted about this topic on G+ - I had to remember what Pascal's Triangle essentially was (it's been awhile since I've reviewed the binomial theorem, etc). No big deal.

Once one understands the actual problem (Wikipedia has a rather nice graphical pyramid showing a few rows being generated), it should be a rather simple task to implement a naive version of it in any modern language. Is my Ruby version the best performing? Of course not. However, it demonstrates some obvious insights, such as that the first and last members of a row are always 1s.

If I were hiring engineers, I'd be slightly wary if they couldn't at least make basic insights such as that and get darn close within a few minutes. Obviously, there are nerves and other things that can cause issues during an interview process - but one should arrive at a general solution rather quickly, even if it's not the most beautiful.

I don't think anyone is trying to be condescending - but the quick naive approach using an array and calculating some sort of offset to get your values is simply bread and butter in our profession.


   def pascal(i):
        o=[1]
        for j in xrange(1,i):
           o.append(o[-1] * (i - j) / j)
        return o
Recursion's nice, but this isn't really the place for it. Especially since python has a maximum stack depth.


That is a very clever solution that appears to rely on some mathematical property of Pascal's triangle that is not at all obvious from its definition. You've basically derived a formula for deriving binomial(n, k) given binomial(n, k-1).


While no interviewer would reasonably expect anyone to come up with that as a first solution (well, unless it was a very mathy job, and even there they'd give you some leeway), it would be reasonable to have a conversation about the structure that eventually lead there.

Some of the best interviewers I've seen approach interviewing almost like teaching, in that they'll ask you questions that lead you towards the answer, just to keep the conversation flowing so they can see how (and whether) you think.

For instance, in this case, after providing a correct recursive implementation you'd hopefully add the caveat that it might not be the best way to do it because of maximum stack depth, and you could talk about that a bit, as well as workarounds. You'd likely be able to easily offer an improved version that iteratively updated a (properly sized) row in-place, [1,0,0,...,0] -> [1,1,0,...,0] -> [1,2,1,...,0], etc., until the last digit became 1, in which case you're done.
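
(For what it's worth, here's a minimal sketch of that in-place update; the function name and the row-length parameter n are my own framing, assuming n >= 1:)

    # Build one row in place: [1,0,0,...] -> [1,1,0,...] -> [1,2,1,...] -> ...
    # Stop once the last entry becomes 1, as described above.
    def pascal_row(n):              # n = number of entries in the row, assumed >= 1
        row = [1] + [0] * (n - 1)
        while row[-1] != 1:
            # update right-to-left so we don't overwrite values we still need
            for j in range(n - 1, 0, -1):
                row[j] += row[j - 1]
        return row

    print(pascal_row(5))            # => [1, 4, 6, 4, 1]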

Then you'd have a conversation about the time and space complexity of that method, you'd hopefully be able to figure out that it's O(N^2) in time and O(N) in space, and the interviewer would ask you whether you could do better. You'd tell him that it's optimal re: space, but that there could possibly be an implementation in O(N) time, and then you'd think for a moment.

From there, the interviewer would give you a gentle prod (after all, this isn't a math interview and we don't have all day) by suggesting you look at the ratio between neighboring elements, and then it's easy to, say, look at the eighth row and see that the ratios are 8/1, 7/2, 6/3, 5/4, 4/5, 3/6, 2/7, 1/8 and figure out dspeyer's formula from there.

The nice thing about interviews in this format are that they don't require any specific prior knowledge; in fact, they work much better when the candidate has never seen the problem before, because you actually get to talk through it with them for the first time.

I realize that algorithms aren't everything, but at a place like Google, at least, they're a lot more important than at a typical Java-shop, so I think questions like this are perfectly fair. If someone can't make it through an interview like that, they're going to have a lot more trouble on the job when they need to cast a complicated machine learning algorithm that's just been invented by the guy down the hall into map-reduce form and get it running well enough so that it can run in real time and actually be tested...


My comment wasn't complaining about interviews or these questions (I'm at Google now and was at Amazon before, so I've made it through these loops before and I've got no bone to pick).

I was meaning to comment on how dspeyer was portraying his solution as a simple iterative alternative to jemfinch's recursive one, when in fact it was an entirely different algorithm that calculated the results in a different way. I thought it was strange that he did not mention this.


Anyone can write it in Python. I think fragsworth did his in J: (-@|. |."_1 [: ":@-.&0"1 !~/~)@i. 5


I'm just surprised nobody has mentioned calculating the solution directly using factorials or some builtin n choose k function.
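
Something along those lines, presumably; a sketch using math.factorial (the function name and 0-indexed row-number parameter n are just illustrative):

    # Compute a row directly from binomial coefficients: C(n, k) = n! / (k! * (n - k)!)
    from math import factorial

    def binomial_row(n):            # n = row number, 0-indexed
        return [factorial(n) // (factorial(k) * factorial(n - k)) for k in range(n + 1)]

    print(binomial_row(4))          # => [1, 4, 6, 4, 1]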


Or the even-less-computationally-intensive algorithm: http://en.wikipedia.org/wiki/Pascals_triangle#Calculating_an...


That's what dspeyer did here: http://news.ycombinator.com/item?id=3431687 .

For an algorithms heavy job, I would hope a candidate would be able to derive this equation with a little guidance (ie, reminding them about n choose k and how to write that with factorials)


Very nice, but expecting the candidate to know that much about Pascal's triangle upfront seems a bit too much ;)


Factorials risk integer overflow in the intermediate calculations, though something similar can work.


Yes, it seems a number of optimizations are possible. At least it is not necessary to calculate n!. Depending on the job discussing these strategies and whether they are worthwhile for some real world problem can be quite interesting.


True, but could just anyone write it as offensively as the below Python one-liner?

(Were one to ever write such code in production, he should be escorted from the building immediately!)

    from itertools import tee, izip, chain
    pascal = lambda rows: reduce(lambda acc,n: acc+[(lambda it: [sum(x) for x in (lambda it,n=2: izip(*((lambda it,n: ([next(it) for _ in xrange(n)],it)[-1])(it,pos) for pos,it in enumerate(tee(it,n)))))(chain((0,),it,(0,)))])(acc[-1])], xrange(rows), [[1]])

    num_rows = 10
    print '\n'.join( ' '.join(map(str,row)) for row in pascal(num_rows) )


PHP version I came up with. Took more than 10 minutes tho...

    for ($i = 0; $i <= 10; $i++) {
        if ($i == 0) $row[$i] = array(0, 1, 0);
        else {
            $row[$i] = array();
            array_push($row[$i], 0);
            for ($j = 0; $j < count($row[$i-1]); $j++) {
                $someval = $row[$i-1][$j] + $row[$i-1][$j+1];
                array_push($row[$i], $someval);
            }
        }
        for ($k = 1; $k < count($row[$i]) - 1; $k++) {
            echo $row[$i][$k];
        }
        echo "<br />";
    }


I tried to recode what you did in scheme for practice

    (define (pascal n)
      (if (= n 0) (list 1)
       (begin
       (let ((L (pascal (- n 1)))
           (ret (list 1))) 
       (do ((i 0 (+ i 1)))
           ((= i (- (length L) 1)))
           (begin (set! ret (append ret (list (+ (list-ref L    i) (list-ref L (+ i 1))))))))
       (append ret (list 1))
        ))))


This is very, very, very bad Scheme. set! and list-ref are a no-no.

This is better scheme:

    (define (pascal n)
      (if (= n 0)
          (list 1)
          (let ((L (pascal (- n 1))))
            (append (list 1) (map + (cdr L) L) (list 1)))))


Thanks for the tip, but when I tried to run it under mzscheme I got the error: map: all lists must have same size; arguments were: #<procedure:+> () (1). Also could you explain why set! and list-ref are bad?


You are doing yourself a great service by learning Scheme - that can fundamentally change how you think about programming. Reading the first chapter of SICP[0] will change you forever. Yet right now you're using Scheme as a Python/C/Ruby with a strange syntax, while it's a totally different language with its own idioms. You should learn them to master Scheme.

As for your particular questions:

1. map yields an error under mzscheme

I'm at work, and didn't have access to a proper scheme, so I used an online REPL[1], which apparently is more forgiving. Regardless, you can write your own map (a good exercise!) that stops as soon as one of the arguments is exhausted. Make it tail-recursive[2] as well!

2. list-ref is bad

Lists are beautiful data structures designed for recursive algorithms. If you are using lists and not using recursion (or a hidden recursion in the form of map), you're doing something wrong. list-ref uses a list as a vector, which has performance implications - your algorithm is O(n^3) while mine is O(n^2).

3. set! is bad

Margins are too small for a proper explanation :), but basically set! has side-effects[3], and functional programming should avoid having them.

Have fun learning Scheme!

[0] http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-9.html#... [1] http://www.bluishcoder.co.nz/jsscheme/ [2] http://en.wikipedia.org/wiki/Tail_call [3] http://en.wikipedia.org/wiki/Side_effect_(computer_science)


Just out of curiosity did you learn scheme primarily through books? Do you actually use the language on a daily basis? Maybe you are a lisp hacker?


Using (map + (cdr L) L) doesn't work since they're different sizes (with DrRacket that I used, I see that it worked for you in your other comment, it is a very nice solution!). I'm not sure of an easy constant time way to append a zero to the end of (cdr L). The way I did it is similar to how you did it without map:

    (define next
      (lambda (l)
        (if (null? (cdr l))
            '(1)
            (cons (+ (car l) (cadr l)) (next (cdr l))))))

    (define pascal
      (lambda (n)
        (cond ((< n 1) '())
              ((= n 1) '(1))
              (else (cons 1 (next (pascal (- n 1))))))))


Ah, (map + (cdr L) L). That's excellent!


Here's my version in C:

    int main(int argc, char ** argv) 
    { 
    	long *n1,*n2;
    	int row,i;

    	row=0; N1[0]=1L; print_row(row,N1);

    	while(row++ < MAX_ROWS)
    	{
    		for(i=0, n1 = N1, n2=N2; i<MAX_BUF && *n1; i++)
    		{
    			if (i==0 )        n2[i] = 0       + n1[i];
    			else if(n1[i]==0) n2[i] = n1[i-1] + 0;
    			else              n2[i] = n1[i-1] + n1[i];
    		}
    		print_row(row,N2);
    		for(n1=N1,n2=N2; *n2; ) *n1++ = *n2++;
    	}
    }

    void print_row(int row,long*n)
    {
        printf("row #%d",row);
        while(*n) { printf(" %ld ", *n++);}
        printf("\n");
    }
This works through the size of long int (at least 50 rows).


I think some of your code is missing (like where N1 and MAX_ROWS are defined).

Here's mine:

  #include <stdio.h>
  #include <stdlib.h>

  long *pascal(int row) {
    long *ret = (row == 1 ? NULL : pascal(row - 1));
    ret = realloc(ret, sizeof(long) * row);
    ret[row - 1] = 1;
    for (int i = row - 2; i > 0; i--)
      ret[i] += ret[i - 1];
    return ret;
  }

  int main(int argc, char *argv[]) {
    int row = atoi(argv[1]);
    long *data = pascal(row);
    for (int i = 0; i < row; i++)
      printf("%ld ", data[i]);
    printf("\n");
    free(data);
    return 0;
  }


Hi,

I ran your code and it does run well. However, it fails for large cases such as pascal(200000).

You mentioned you worked for Google and Amazon; how can you modify the code to get an answer for pascal(200000)?


Yeah, for large cases the long will overflow (also it will do a lot of realloc()). Your two main options for this are:

  1. use double to get approximately-correct answers
  2. use an arbitrary-precision library like GMP


What if I wanted exact precision? Could I use a custom data type to get better precision? If I used a custom data type, how could I make the code work on a distributed system to get parallel behavior?


upvoted for realloc trick


This inspired me to write one that is even shorter and doesn't heap-allocate any memory at all. What can I say, I'm a sucker for minimal C.

  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char *argv[]) {
    int row = atoi(argv[1]);
    long data[row];
    for (int i = 0; i < row; i++) {
      data[i] = 1;
      for (int j = i - 1; j > 0; j--) data[j] += data[j - 1];
    }
    for (int i = 0; i < row; i++) printf("%ld ", data[i]);
    printf("\n");
    return 0;
  }


When did C start accepting non-constants for array size declarations?


Variable-length arrays were added in C99.


I was wondering the same thing. But I can confirm it compiles and runs, though I had to supply a -std=c99 flag to gcc.


Ahhh, useful tip. I've just been using alloca instead...


Best solution I've seen here so far.


If row is too big you can bust your stack frame when you define your data array.


True, though in this example integer overflow would happen first.


I think your code is off by 1.


If someone had some experience with Pascal's triangle, they could come up with a very efficient solution, which might put them at an advantage, if the interviewer valued efficient solutions.

So far dspeyer has one of the most efficient algorithms.

  def pascal(i):
        o=[1]
        for j in xrange(1,i):
           o.append(o[-1] * (i - j) / j)
        return o
in ruby this could be

  def pascal2(i)
    o=[1]
    i += 1
    for j in 1.upto(i - 1 )
       o << o[-1] * (i - j)/j
    end
    o
   end
We could make it even more efficient if we remember that there is symmetry in Pascal's triangle, so we don't have to calculate all the values in a row, just about half of them. We have to calculate the row a little differently depending on whether it is even or odd in length.

  def pascal(i)
   o=[1]
   i += 1
   if i == 1 then
    return o 
   end
   i.even? ? d = i / 2  - 1 : d = i / 2 
   for j in 1.upto(d )
            o << o[-1] * (i - j)/j
   end
   o[0..i/2] + o[0..i/2 - 1].reverse
   end
So far it is up to 2x as fast. I wrote this method in Ruby to time the different methods, pascal and pascal2.

  def timeA(m,n)
    begin_time = Time.now
    send(m,n)
    end_time = Time.now
    p "Time elapased #{(end_time - begin_time)*1000}    milliseconds"
    end
these are the times I get

  ruby-1.9.2-p290 :037 > timeA('pascal2',40000)
  "Time elapased 949.9490000000001 milliseconds"
   => "Time elapased 949.9490000000001 milliseconds" 
  ruby-1.9.2-p290 :038 > timeA('pascal',40000)
  "Time elapased 470.48699999999997 milliseconds"
   => "Time elapased 470.48699999999997 milliseconds" 
  ruby-1.9.2-p290 :039 > timeA('pascal',80000)
  "Time elapased 1776.224 milliseconds"
   => "Time elapased 1776.224 milliseconds" 
  ruby-1.9.2-p290 :040 > timeA('pascal2',80000)
  "Time elapased 5722.419000000001 milliseconds"
   => "Time elapased 5722.419000000001 milliseconds" 
  ruby-1.9.2-p290 :041 > timeA('pascal',200000)
  "Time elapased 160416.34199999998 milliseconds"
   => "Time elapased 160416.34199999998 milliseconds" 
  ruby-1.9.2-p290 :042 > timeA('pascal2',200000)
  "Time elapased 235113.38 milliseconds"
  => "Time elapased 235113.38 milliseconds"


I agree that the code should be simple enough to write in one go.

But that is under the (large) assumption that a suitable recursive algorithm has been found. The programming or coding is not the issue here. It's understanding the problem well enough and then identifying a recursive strategy.


five-minute coffee, am I hired?

    getRow = (n) ->
        rows = [[1]]
        for i in [1...n]
            prev = rows[i-1]
            rows[i] = []
            for j in [0..i]
                rows[i][j] = (prev[j-1] or 0) + (prev[j] or 0)
        rows[n-1]
and a one-liner for those who appreciate:

    (n) -> [[1]..n].reduce (p, c, i) -> ~~p[j-1] + ~~p[j] for j in [0..i]


Two minutes?

That's about 9 seconds per line.

Unless you are typing a program that you memorized - you cannot get such speed.


Your code sucks, dude ;) Actually I don't know Python, so I don't know what's going on. Here is a real man's program in C++; it didn't take long to write, but it did take long to figure out how to calculate it correctly.

  #include <iostream>
  using namespace std;

  int main()
  {
      int max;
      cout << "How big do you like it?\n";
      cin >> max;
      if (max < 0 || max > 30)
          return 0;

      for (int row = 0; row <= max; row++)
      {
          cout << 1 << " ";

          unsigned int out = 1;
          unsigned int last;
          unsigned int next = row;
          for (int i = 1; i <= row; i++)
          {
              last = out;
              out = last * next / i;
              next--;

              cout << out << " ";
          }

          cout << endl;
      }
      return 0;
  }
EDIT: there we go :D small errors corrected

For those that are just brainfucked by Wikipedia's explanation: 1. Wikipedia sucks at explaining math. Never use it to look up something that deals with math. 2. The best explanation is in the middle of the article, "Calculating an individual row or diagonal by itself"... "For example, to calculate row 5, r = 6. The first value is 1. The next value is 1 × 5/1 = 5. The numerator decreases by one, and the denominator increases by one with each step. So 5 × 4/2 = 10. Then 10 × 3/3 = 10. Then 10 × 2/4 = 5. Then 5 × 1/5 = 1."


u mad bro, that u had to write so many lines, and yet your code couldn't print out pascal(5000)

u mad?

in ruby,

  def fact(n)
    n == 0 ? 1 : n.downto(1).inject(:*)
    end

  def pascal(n)
   0.upto(n) {|r| print "#{fact(n)/(fact(r)*fact(n - r))} "}
  end


I did the same experiment. Got a sheet of paper and figured out the concept behind this problem within a minute, but it took me quite some time to put it into code. Wrong logic / not perfectly concentrated / choosing the wrong way for a moment, etc.

Am I a bad programmer now if I can't get a running program in less than 5 minutes?

I would rather let people apply with code they are proud of, like in every design job, than have them solve litmus tests/puzzles.


'but I tend to think that problems like "Pascal's Triangle" don't represent what we work on day-to-day'

There are plenty of software jobs where that statement is absolutely true. I am not interviewing people for those jobs.

Solving a well-defined problem such as "write code to print the Nth row of Pascal's triangle" is substantially easier than most of what we deal with on a day-to-day basis. The hard part is discovering and defining the problem well enough to get to the point where it is as easy as "here is what we need to do, write some code to do it". Once you get there, whipping out the code needs to be second nature.

As cletus said in his post - a problem like this is a negative filter. If someone has a hard time whipping out the code for a well-defined problem like that, they will have a very hard time here.

If they can't, it doesn't mean they are a bad programmer. It doesn't mean they can't be successful in the industry.

It is simply a strong signal that they would not be successful at Google. And since I'm interviewing people for jobs at Google that's what I care about :-)


It's probably more likely that it means you didn't graduate in CS within the past couple of years, which I suspect is really the purpose of these filters.

Would you rather be tested on your ability to comprehend a multi-kloc codebase and make correctness-preserving modifications to it?

That depends on how rigorous and well-maintained the existing test suite is, or, even worse, whether it is only half done. Which half? Guess; I think you'll be surprised.


I think that's more cynical than is appropriate. You're assuming that every successful candidate is regurgitating this from memory.

Solving Pascal's Triangle is a straightforward yet not-completely-trivial logic exercise. It's the kind of algorithm design task that "serious programmers"[1] need to do, successfully, every day. Goofed logic bugs in algorithms are Very Expensive bugs. People who can avoid them are very desirable. This particular problem can be solved in just a few lines of code with no edge cases, which makes it a great candidate for a test to detect those developers.

[1] And yes, of course one can be a "successful software engineer" without being a "serious programmer". Writing CRUD logic and the like may be its own skill (and one not helped by algorithmic reasoning), but CRUD developers are cheap and plentiful. Tests like this give you a marker for the people who are hard to find.


You're assuming that every successful candidate is regurgitating this from memory.

By the same token, you appear to be assuming that successful candidates aren't. But I'm not trying to polarize.

Wouldn't it be better to invent a scenario for which a solution/algo must be constructed, during which the interviewer and interviewee can bounce their relative expertises? Is it too much to ask for the questions not to be rote(-ish) Pascal's Triangle stuff?


Not to get meta on you, but you're assuming that I'm assuming. I assure you I'm not assuming anything. I tried it myself, successfully, having never done it before, and it's not that hard to solve. Someone else posted their solution here. Good programmers can come up with that kind of code routinely, mediocre programmers can't. And this is a test to distinguish the classes.

Talking about software design methodology cannot tell me whether someone can write new algorithms correctly. If I have a job that requires people to correctly design algorithms, I think asking questions about algorithm design is entirely appropriate.


I'm not talking about methodology; I'm talking about the interviewer coming up with a problem that requires an algorithm to be designed, which it is the interviewee's task to solve. However, if the interviewer themselves doesn't know algorithms well enough to come up with a decent problem....


"rote"?

Are you seriously suggesting that anyone has code for a line of Pascal's triangle memorized? Why would they? Just in case they got asked for it in an interview?

This is one of the advantages of this over something like quicksort: it's obscure enough that the candidate won't be able to recite it.


Coding takes time. It's more cost-effective to ask the interviewee to code a non-trivial task at home and review his results before inviting him in. Then you already know if he can code, and you can walk through his thinking on a whiteboard with a more specialized case.


Except that there's a difference between what people can produce and what they can google up. When you let people out of the room to do something, you have no way of knowing if they did it themselves. Or if it took them 12h to do it.


When I was giving interviews, I'd have the candidate bring up the code sample that they submitted on their own laptop, and start making changes. If I found any bugs, I'd ask them to fix them; otherwise, I'd extend the problem in some way that their existing code didn't anticipate and ask them to change their code to meet the new spec.

Not only did I find this very helpful in evaluating prospective employees, several people that we hired told me later that it was the most fun they'd ever had in an interview. That is, of course, a biased sample; some of the people that I interviewed obviously did not find my interview style entertaining.

The litmus test that I generally used was whether, given an erroneous output of their own creation, they could reason about why it happened and fix it before the hour-long interview slot was over. Also, if I (watching over their shoulder) couldn't spot the cause of the bug, I considered it probably unreasonable to expect them to be able to find and correct it during the interview.

While I encountered a lot of candidates that froze in the face of compiler errors or started making changes without stopping to think about what the cause of the error might be, I never once got the feeling that anyone was less familiar with the code that they submitted than someone who had written it themselves.


I don't want to put words in the OPs mouth, but in my experience knowing that you should use a recursive solution probably puts you ahead of the curve. Being able to produce a solution which has the correct general form possibly after a little prompting would count as a reasonable pass.
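For what it's worth, the "correct general form" being alluded to is probably something like this sketch in C (ignoring efficiency; the function name is arbitrary):

    /* Entry (row, col) of Pascal's triangle: 1 on the edges, otherwise the
       sum of the two entries above it. Exponential without memoization, but
       it has the right recursive shape. */
    int pascal(int row, int col)
    {
        if (col == 0 || col == row)
            return 1;
        return pascal(row - 1, col - 1) + pascal(row - 1, col);
    }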


I think it depends on what you're hiring for.

For 90% of engineering tasks, you're just plugging together different technologies other people put together. Oh hey yeah, Unix + HTTP + Rails + Javascript + Some Dude's API.

The amount of SHEER INTELLIGENCE necessary to untangle the above is, I think, not very high. It's a low threshold of competence. So, for that range of problems, fumbling around for fifteen minutes in front of a whiteboard will display the necessary grasp of logic.

The hard part in interviewing (and what people usually screw up), in my opinion, is almost entirely finding someone with a can-do attitude whom you can get along with for 40 hours every week.

Almost anyone can string stuff together; finding people that you can trust is way harder.


I think this response generally mischaracterizes the whole technique of whiteboard testing. If you got the general algorithm implemented recursively quickly, you're done. A bunch of logic errors? Oh well. The interviewer may point them out, at which point you can discuss them.

You do not have to create perfect code on the whiteboard. You just need to demonstrate that you can code. Sounds like you would have been fine.


Maybe this just means I'm a bad programmer, but I tend to think that problems like "Pascal's Triangle" don't represent what we work on day-to-day.

You are right! In fact, I seriously doubt any serious software project that is solving business problems ever works on these sorts of problems.

These problems are basically testing mathematical knowledge through programming.

They may make a great interview to hire professors but not programmers.


I don't know what you code, but what I code, which is classic web apps, very often requires the kind of thinking that solving Pascal's triangle requires. I often need to choose how to model my data, how to glue it together in different forms, etc. I'd say even a simple array display with a border is similar.


Here's a 5 - 10 minute iterative version in C.

https://gist.github.com/1567842

I tend to like to rough code out Lispishly, but if I'm actually running stuff I'll often do it in C.


In my experience, interviewers who give whiteboard problems are not expecting perfect answers or even syntactically correct answers, although if the interviewer sees a bug (e.g., missing some corner case), you may be prompted to look for it. The goal is to have a basis for follow-up questions like “how would you change it if the input was type Y instead of type X?” or “what’s the time/space complexity of that algorithm?” or “how would you write a test for that function?”


You only fail the 'litmus test' if the interviewer was looking for you to write working code. Whiteboard coding, done right, isn't about writing bug-free code; it's about observing the thought process that goes into solving programming problems.


I always write working code (at least after debugging). I have a hard time understanding why people can't see the logical fallacy here: having a programmer write code that may or may not work to test their ability as a developer, when their ability as a developer is measured by being able to write working code. I personally would rather see working code and an explanation of what they have done. I can gauge someone's ability to write code from that.


The problem is: I don't do thinking by writing code. I do thinking by thinking. Often away from the computer (and even farther away from whiteboards).


What is it you've been doing for 10 years? It does sound like your basic coding skills are pretty atrophied.


> If you can't write a function that gives me a row of Pascall's triangle, that's a problem.

I've developed high-performance medical imaging applications. I've built large distributed financial systems. I've built 2D/3D games. I've done a ton of stuff. Hell, I'm a software engineering hiring manager at my company. Yet I have no idea what Pascal's triangle is, or how to give you a row of one.

I consider questions like that to be as useless as brainteasers in determining whether or not to hire someone. Sure, it can help give clues about a candidate, but if you're seriously considering rejecting a candidate on that point alone, I think you've got a problem in your hiring process.


I think you're misunderstanding. This is not a test to see if you know what Pascal's triangle is... it's a simple test of your ability to take a simple algorithm that has been explained to you and translate it into code. This is the essence of programming.

Pascal's triangle is simply a triangle which starts at 1, where each number in the next row is the sum of the two numbers above it, with 0 assumed for the missing numbers at the edges.

A program to produce one is nothing more than a couple of for loops, printf, and some basic logic.
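To make that concrete, here is a rough sketch in C of the "couple of for loops, printf, and some basic logic" version, assuming a small fixed number of rows and a hypothetical function name:

    #include <stdio.h>

    void print_pascal_triangle(int rows)
    {
        int prev[32] = { 0 }, cur[32];                  /* assumes rows <= 32 */
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c <= r; c++) {
                int left  = (c > 0) ? prev[c - 1] : 0;  /* 0 off the left edge */
                int right = (c < r) ? prev[c]     : 0;  /* 0 off the right edge */
                cur[c] = (r == 0) ? 1 : left + right;   /* the triangle starts at 1 */
                printf("%d ", cur[c]);
            }
            printf("\n");
            for (int c = 0; c <= r; c++)
                prev[c] = cur[c];                       /* this row becomes "the row above" */
        }
    }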


Yes, in retrospect, I think I was misunderstanding. OP clarified his comment, and in that case, I think it makes sense. I agree - that is the essence of programming, and if you truly cannot do that, you have no business being a programmer. However, there is more to software engineering than being able to translate algorithms into code, and I'm not so sure that a single litmus test should make the difference between hiring/not hiring a candidate.


There's more to playing baseball than running, but someone who can't run probably won't make a very good baseball player. Sure, there's going to be some error rate, but that's inevitable.


Yes, true, but curlers or rock climbers don't need to run the same way baseball players do.

All I'm saying is that there needs to be a multi-faceted look at a candidate. Some candidates are going to be so obviously poor that they do merit instant rejection. However, in my experience, that is the exception rather than the rule.


Out of curiosity: where do you come in, in the interviewing process? I don't have direct experience, but I frequently hear that very trivial programming questions weed out more than 50% of applicants to programming jobs.


I am in the middle; our candidates are screened by HR (a process I'm not super fond of), then they come to me for various levels of tech/cultural fit interviewing. Then my comments are forwarded for final approval. Generally, if I say "hire", they hire. If I say "run", they run.

And yes - trivial programming questions do weed out a fair number of candidates. However... it's tricky. A trivial programming question can look different to many different people, especially if they have varying skill levels and depending on their level of comfort in interviews.

I try to give people the benefit of the doubt. If one question stumps them, I'll try to gauge their understanding with another similar (but different enough) question. If that doesn't work... sometimes I'll give them another shot, but usually I'll start winding the interview down.


Sadly, for baseball, this is far from the case :(


Algorithms are one thing, but I don't know of a single business that is dependent on Pascal's Triangle. That is, is it realistic to use PT?


You're focusing too much on the wrong thing. The point is being able to take a (simple) problem and convert it to code. The problem doesn't matter, the method does.


I have no idea what Pascal's triangle is. But I'm able to write bookkeeping/accounting software and web frameworks. So I guess we'll have to look for jobs elsewhere :)


I had no idea either, but he qualified his statement saying he would explain what it was. I just looked up the Wikipedia entry, which explained it pretty well, and I would think that you'd have no problem building a prototype given your listed experience. The point he is making is that he will give you a problem (and Pascal's triangle seems to be a fairly simple one at that) and then be involved in your analytical process. It sounds fair to me.


I've been putting food on the table writing code every day for almost two decades along similar lines as king_magic, and I also do not know what Pascal's triangle is.


10 years working in the embedded industry, and no idea what Pascal's triangle is.

I would ask you to explain, you would smirk inside, and I would be rejected.

I am sure Google has enough data by now to see if there is a correlation between knowing this problem and the performance of hired programmers. I will be surprised if there is any relation.


The interview question isn't supposed to test your knowledge of Pascal's Triangle - it's designed to test your ability to create it programmatically.

I'm sure cletus or any other interviewer would explain what Pascal's Triangle is to you, and provide a couple rows of example output if you asked.

EDIT: Find/replace "Pascal's Triangle" with "FizzBuzz" and reread the comment. You'll get the same insight without getting hung up on seeing a term you didn't know.


Very much this. My company has been taking a deeper look at its hiring processes lately, and I know there are explicit comments on our small coding problems that say things like:

"Candidates might get hung up on the gotcha part of it: the actual snip calculation. That's not something we care about, but that the candidates will likely think we care about. We should consider just giving them the formula (or letting them use a pre-defined "snip" method)."


What is FizzBuzz and what does it have to do with day-to-day software engineering?


What switz said.

It's useful as a very early screening because it eliminates a lot of people who are wasting your time, and it is so simple as to have almost no false negatives. It has no complexity, no algorithms or concepts you might be unfamiliar with -- just the for loop, the conditional, and the modulo operator. No one who claims to be a professional developer should fail it.

For reference: http://www.codinghorror.com/blog/2007/02/why-cant-programmer...


FizzBuzz:

Write a program that prints the numbers 1 to 100. However, for multiples of 3 print "Fizz" and for multiples of 5 print "Buzz". For multiples of both 3 and 5, print "FizzBuzz".


I've gotten that one a few times, and I always write something that only contains the string 'Fizz' once and 'Buzz' once (i.e., doesn't say 'FizzBuzz' separately, just uses the same logic to print 'Fizz' and 'Buzz' both), because I figured that was partly the point; otherwise it would be "print 'Zarples' for both multiples". I've been told that it's the first time the interviewer saw it done that way, and now I see several replies here that also would take no modification to use 'Zarples' instead of 'FizzBuzz'.

No point to this, I guess, other than that even something as simplistic as this is open to interpretation.
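For the curious, a sketch of that variant in C: 'Fizz' and 'Buzz' each appear exactly once, and the combined "FizzBuzz" case falls out of the same logic rather than needing its own branch:

    #include <stdio.h>

    int main(void)
    {
        for (int i = 1; i <= 100; i++) {
            int said_something = 0;
            if (i % 3 == 0) { printf("Fizz"); said_something = 1; }
            if (i % 5 == 0) { printf("Buzz"); said_something = 1; }
            if (!said_something)
                printf("%d", i);
            printf("\n");
        }
        return 0;
    }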


Couldn't resist using my new Haskell skills.

  show' :: Int -> String
  show' x
    |((rem x 3 == 0) && (rem x 5 == 0)) = "FizzBuzz"
    |rem x 3 == 0 = "Fizz"
    |rem x 5 == 0 = "Buzz"
    |otherwise = show x

  main :: IO ()
  main = putStrLn $ foldl (\acc x -> acc ++"\n" ++ show' x) "" [1..100]


A couple of suggestions: functions like rem look better infix:

     x `rem` 3
this means the same thing but is more readable (I think).

Instead of folding, just map putStrLn over the list:

    mapM_ (putStrLn . show') [1..100]
mapM_ is just a version of map for monads that ignores its result (it returns m () instead of m a). You can also use forM_ which is mapM_ with its arguments reversed.

Finally, I think this site lets you format code by putting four spaces in front of each line. I hope :)

Edit: One other thing. You don't need all those parentheses in the first line.

    rem x 3 == 0 && rem x 5 == 0 = "FizzBuzz"
    x `rem` 3 == 0 && x `rem` 5 == 0 = "FizzBuzz"
Both versions should work and I think they are easier to read. I could see (rem x 3 == 0) also being clear, but the outermost parentheses do not help at all.


Here's a shorter version:

  fizzbuzz x | x `mod` 3 == 0 && x `mod` 5 == 0 = "FizzBuzz"
             | x `mod` 3 == 0 = "Fizz"
             | x `mod` 5 == 0 = "Buzz"
             | otherwise = show x

  main = putStr $ unlines $ map fizzbuzz [1..100]


For us old-school types, here's the 60-second version in C:

    #include <stdio.h>

    int main(int argc, char ** argv)
    {
        int i;
        for (i = 1; i <= 100; i++)
        {
            if ((i % 3 == 0) && (i % 5 == 0)) printf("FizzBuzz");
            else if (i % 3 == 0)              printf("Fizz");
            else if (i % 5 == 0)              printf("Buzz");
            else                              printf("%d", i);
            printf("\n");
        }
        return 0;
    }


A trivial piece of code that a candidate should be able to write as quickly as they can type on a computer or write on the board. It's designed to test that (1) they can write basic code and (2) they've written enough code that trivial problems are trivial. Jeff Atwood coined the phrase [1] and gave the classic example:

   Write a program that prints the numbers from 1 to 100. But for multiples 
   of three print "Fizz" instead of the number and for the multiples of five
   print "Buzz". For numbers which are multiples of both three and five print
   "FizzBuzz".
[1] http://www.codinghorror.com/blog/2007/02/why-cant-programmer...


Can someone tell me a lispier way of doing this than

  (dotimes (n 100)
	   (cond ((= 0 n))
		 ((and (= 0 (mod n 3))(= 0 (mod n 5)))
		  (format t "~a: Fizz Buzz!~%" n))
		 ((= 0 (mod n 3))
		  (format t "~a: Fizz!~%" n))
		 ((= 0 (mod n 5))
		  (format t "~a: Buzz!~%" n))
		 (t (format t "~a~%" n))))


Surely, the lispiest way is to write an interpreter! Write a function fizzbuzz-eval so that the following is a solution (left as an exercise). ;)

    (fizzbuzz-eval
     100
     '(((3 5) (print "FizzBuzz"))
       ((3)   (print "Fizz"))
       ((5)   (print "Buzz"))
       (()    print-number)))


Here's a macro. There's some issues :) The biggest one is you have to make sure to put the multiple divisors list first.

The call is

  (fizzbuzz (100 (((3 5) "fizzbuzz")(3 "fizz")(5 "buzz"))))

  (defun make-cond (item)
    (let ((divisors (first item))
          (exclamation (second item)))
      (if (listp divisors)
          (let ((div1 (first divisors))
                (div2 (second divisors)))
            `((and (= 0 (mod n ,div1)) (= 0 (mod n ,div2)))
              (format t "~a: ~a~%" n ,exclamation)))
          `((= 0 (mod n ,divisors)) ;else
            (format t "~a: ~a~%" n ,exclamation)))))

  (defmacro fizzbuzz (params-list)
    (destructuring-bind (num-to req-list) params-list
      (let ((cond-list '(((= 0 n)))))
        (loop for item in req-list do
              (push (make-cond item) cond-list))
        `(dotimes (n ,num-to)
           (cond ,@(reverse
                    (push '(t (format t "~a~%" n))
                          cond-list)))))))


Hehe. That's quite a monster. I'll try to be a bit more helpful.

1. Use loop instead of dotimes so you can make the index range over 1-100 instead of 0-99.

2. Factor out the string calculation so you only need one print statement.

3. Maybe use zerop.

4. Don't write ")(". Use a space in between.

Example:

    (loop for n from 1 to 100
          do (format t "~a~%"
                     (cond
                      ((and (zerop (mod n 3))
                            (zerop (mod n 5)))
                       "FizzBuzz")
                      ((zerop (mod n 3)) "Fizz")
                      ((zerop (mod n 5)) "Buzz")
                      (t (format nil "~a" n)))))


Thanks!

With the macro you can use arbitrary divisors though ;)

Any desire to refactor that beast?


Same here, and I have delivered some big stuff; your single test would have filtered me out, and I am without a doubt a 10Xer in my field. We assume what we know is common knowledge because we know it. But each of us knows different things, and that is what makes us valuable.

I have to ask the parent poster if they truly feel that people need to know this http://en.wikipedia.org/wiki/Pascal%27s_triangle to deliver web or mobile solutions? You are highlighting the issue that the article is talking about.

Further, a whiteboard is like giving a mechanic a drawing of a car and a hammer and saying "fix the head gasket." If creating a row of Pascal's triangle were a real issue that crossed my desk, the first thing I would do is hit Google to find the mathematical formula and an explanation of what it does; then I would see if there is a standard library; and finally I would write it myself if one does not exist. Your whiteboard does not have search capabilities on it, so how are you expecting someone to show you how they work when you haven't provided them with any semblance of the environment they would work in?


I believe you are misunderstanding.

Pascal's Triangle is not some obscure and difficult mathematical formula you are expected to have memorized... This is not "construct Pascal's Triangle, and if you don't know what that is, too bad..."

It's a very simple concept and doesn't require knowledge of anything other than simple arithmetic; any competent programmer should be able to write a program to construct one once the concept is explained to them... it's just a couple of for loops, some printfs, and straightforward logic.

That's the whole point the poster was making... really simple programming problems like FizzBuzz are useful for quickly weeding out people who lack very basic programming ability.

Constructing Pascal's Triangle is obviously not part of normal programming, but the ability to take an algorithm that has been described to you verbally and translate it into code is at the heart of all programming... that's basically what programming is.

If you can't do it, you are ipso facto not a real programmer.


Then you need to give said competent programmer a computer, a compiler, a debugger, and the ability to search for information. You are asking programmers to show you that they are competent in their daily work, but not giving them any of the tools they work with to do so. Again, it's like asking a frame carpenter to draw a truss and expecting that to somehow represent their daily activity of putting a truss together. If you want to test somebody, do it how https://webmynd.com/ does it and make it a prerequisite of getting to the interview; that way developers can do it on their terms, using their tools of the trade, and opt out before they are stuck in a room in a bad situation.

I still contend that you are not gaining much insight and are producing an unnecessary filter that can do you more damage than good. I was on a project once that was cursed: we had a developer have a stroke, I had to have my heart restarted due to SVT from sleep deprivation, etc. We literally lost everyone who knew the technology except for me, and I was not in good shape. There was this one guy, Tom Marra; he was an artist, but the kid had passion. He had never coded in his life, but we were in a really bad situation, and once he found out that with source control he could hack the crap out of the code and then just re-pull to fix his mistakes, he set out to teach himself JavaScript on his own initiative. In less than a month he was contributing code to source control, and in 2 months you could not tell his code from a seasoned pro's. The one quality this kid had was passion. You can't identify Tom Marra's by making them solve trivia on a whiteboard.


Well nobody said they aren't being given those things, you certainly might be...however even then it's not true.

None of those things are necessary to solve the problem. There's no need to search for information...all the information you need is contained in the problem. It's an exceedingly simple algorithm, and if you can't take the verbal explanation and translate it into code, you are simply not a programmer.

What would you search for -- how to write a for loop? If you don't know such basics as flow control and basic logic, you probably need to go back to school. If you can't remember the usage of printf, then you either picked the wrong language, or you can ask for help, or simply explain that you'll fudge it a little -- a good interviewer wouldn't penalize you. I'd note that if you are interviewing for a language-specific position and said you are fluent in it, not knowing such a basic API might count against you though...

I think the important thing you are missing is that generally in "whiteboard" type situations, you are not expected to produce a completely correct/compilable program. Generally pseudo-code or whatever language you please is accepted and if you make some small syntax or logic error it's not a big deal. In light of that, a compiler/IDE/computer is not necessary unless you have some other reason to require them...such as a disability.

The important part is to see how one reasons through the process of taking a described algorithm and translating it into code...that is the essence of programming. If you can't do it, you are ipso facto not a programmer.

Your analogy is false...in your example the person is not being tested on the actual job they are expected to perform. Obviously that's not a good test.

In this case the programmer is being tested on precisely what their day to day job is: taking a verbal description of a process and translating it into code....so an accurate analogy would be to ask an architect to sketch a simple truss structure and discuss it, which is not unreasonable.

The entire point here is to give a drop dead simple problem to weed out the people who are fundamentally unfit for the position.


I often look up simple stuff like that; my job is a mixture of C++ MFC, ASP.NET C#, JavaScript, jQuery, Underscore, Java SWT, T-SQL, and sometimes VBScript. I do Scheme, Lua, and C for fun.

Once in a while I have to look up how to format a string and how to use map, reduce, etc.

Do I have to go back to school because of that? Heck, I don't even remember what I had for breakfast yesterday.


There's a huge difference between having to look up syntax/APIs and what is being discussed here. Everyone will occasionally forget the syntax for something, or the argument order for a function... especially in a language they don't use all the time.

For this sort of test, all that is necessary is that you understand the basic idea of a for-loop...the basic idea of how to format a string...remember there's a good chance you're writing in pseudo-code.

Presumably however if you are a programmer, there is at least one language where you should be able to complete this task without looking anything up. It requires only the simplest of language constructs/library functions.

The exception of course is if this is a job for say, a Python programmer...and on your resume you say you are a "Python expert". The fact that you have to look up the syntax for basic constructs in Python would suggest you were lying on your resume. That's one of the things this exercise tests for.

If you said in your resume "I have experience with other languages, but I can learn Python"...then just being able to do this exercise in pseudo-code or any language of your choice would show you have basic programming skills and should be able to pick up Python.

If you really have to remind yourself what a for-loop is then yes, maybe you do have to go back to school/hit the books and study harder.


YmMot,

I see where you're coming from. I think the issue is that 99% (100%?) of all HNers would likely be great hires for most companies, so they see these questions as trivial, dumb, and not representative. Just the fact that someone thinks about these types of questions and reads programming blogs probably puts them ahead of most of the programming population.

The problem is that many of us have really never seen the mythical horrible programmer: the person who really can't even start a solution to FizzBuzz, the one who leaves the interviewer wondering how they got jobs at other companies. I have seen and interviewed them before, and it's frightening and uncomfortable for everyone involved. I've had candidates break down crying when I asked them to write a simple SQL select statement on the board because their resume said "expert in SQL". For many, it's hard to fathom that these people are out there, but they do exist.


Exactly. I have interviewed many candidates and later fired several too. Some people who look good on paper are so desperate that they lie about everything! In fact, I had to fire the first person I ever hired, because he claimed to be expert at X and Y (lies and lies), but weak at Z (truth). So ok, didn't seem like a huge problem until I realized that he didn't know how to program... at all. In any language. My mistake was that I didn't test ACTUAL coding skills at all, because why would anyone lie about it? LOL!

FizzBuzz-style questions are really important (e.g., create an array with 4 numbers -- 1, 2, 3, and 4 -- and output the values. Oh, a "Java and Python 'expert'"? Please, in both languages then!). Although I agree that home tasks and trick-algorithm questions are useless.


Although I agree that home tasks and trick-algorithm questions are useless.

I don't think home tasks are useless, but they have to be used properly. A home task geared to 1-2 hours seems reasonable for an exercise between the first and second interview. Some may say the candidate will just look up the answer (it's encouraged!), and that's why during the second interview the employer should discuss their answer. If the candidate can explain why they made particular decisions on the take-home piece and can have an intelligent discussion about the trade-offs they made, then who cares if they looked up the answer? In fact, the home test is probably closer to real programming, since I don't know anyone who codes without Google nearby.

So, my preferred method is to use FizzBuzz as a first cut on the first interview and then give a take-home quiz that should take the candidate 1-2 hours or less if they are good.


This is the part I fundamentally disagree with, and the reason I am not a proponent of testing: it is loaded with so many assumptions on the part of the interviewer. I once failed to get a JavaScript gig because I did not use the arguments array in a method. The interviewer's question was: given an arbitrary number of parameters passed into a JavaScript function, how would you sum all of the parameters? I contend that passing an arbitrary number of parameters into a JavaScript function is something that I will never do. Not only that, I would have a talk with any developer on my team who did, unless there was some very special need for such magic code. The interviewer thought that this was a perfectly reasonable filter and said so; it was stated as the reason that I did not get the position.

You see, that's the crux of the issue: I can be a JavaScript expert and deliver massive JavaScript applications without having to resort to the arguments array to figure out how many parameters were passed into a function. Also, if I ever had a use case for it, I would look up the API and probably mentally discard it shortly afterwards. The interviewer thought that it was a perfectly reasonable filter, and yet I am one of the top JavaScript talents in my market. It did not work for them, because they assumed that knowledge of the arguments array was basic knowledge and any developer worth their salt would know it. Yet in my time working with some great JavaScript developers, not one of us ever used it. Further, it harmed them, because I now believe they are as incompetent as they believe me to be; as such, if someone in my peer group asked about a position with them, I would relay my story. Not to be indignant, but to be truthful -- and as such they lost access to valuable resources in a constrained market.

That being said, one can be an expert in, say, Java and not know the JDBC API; maybe all of their work revolved around JPA. Many in the Java world would argue that JDBC is fundamental and should be known by any expert, but a new crop of developers have never known anything but ORMs. We older developers assume that JDBC is a fundamental prerequisite because it was in our time; we can't see anything but the progression from JDBC to JPA, because that is the life experience we had. As such, it's flawed and irrelevant. You have to be very careful not to bring interviewer bias into an interview, and tests are loaded with it -- even the simplest of tests.

Does it not strike you that quite a few HNers openly admit that they would not pass the filter? Some of them are long-time contributors and members, some of them very respected. I would question the viability of my assumptions if HNers were openly saying "no, your filter would eliminate me." I mean, we have some of the best of the best lurking here. 37signals has been complaining about it for a long time, I have had my own threads about it, and everyone agrees that hiring is flawed. Maybe it's time we reevaluate our assumptions based on what guys like the crew at 37signals are doing, since they seem to be improving on candidate selection over the rest of us.

The point I am making is that what you are trying to gauge can be identified by asking a candidate to show you something they built and to walk you through the routine they are most proud of. There are going to be functions, loops, if statements, etc. in that code; if they can't explain what they built, then you should be suspicious. Giving them a test just introduces the interviewer's bias about what they should know, and does nothing to tell the interviewer what they do know.


"Does it not strike you that quite a few HN'ers openly admit that they would not pass the filter?"

To be quite honest, this thread is having the opposite effect. I'm questioning the quality of HN contributors based on their inability to either comprehend the question or code a simple algorithm.

I passed similar coding questions in interviews before I'd ever written a line of code in my life (in a Microsoft PM internship interview). No one cared if I wrote a semicolon or not - the interviewer was asking whether I understood basic concepts like loops and logic.


I don't think that anyone is arguing that candidates should not be able to code a simple algorithm. What the original article and other respectable individuals in the industry have been arguing about is the best way to deduce one's ability to code a simple algorithm. Some people are just so uncomfortable and out of their element coding in an interview situation that the assumption that it is a negative filter does not hold true. Many good candidates get missed over this assumption. What many hiring managers, like those at 37signals and pen.io, have found out is that you can actually identify more quality individuals, while deducing the same abilities, without placing people in that unnatural coding environment. They are writing about it because they have seen evidence that it is an unnecessary filter. It is not that one does not need to deduce whether the applicant can code; rather, they believe they have found a better, simpler way to deduce it. Their observations coincide with what I have seen in the industry, so I am an advocate for such practices.


"Does it not strike you that quite a few HN'ers openly admit that they would not pass the filter?"

I have yet to see someone who says that they would not pass the filter. Every single example I've seen in this thread is someone making up a different filter and saying that they would not pass that. I see people saying that they took a long time to debug subtle bugs in it, that they don't know the definition of the problem, or that in general they need documentation to code, all of which is completely irrelevant to the filter as given. Subtle bugs are allowed, the definition is given to you, the solution doesn't need any documentation (and if it does, you can just make up whatever API call you expect would be present in a sane standard library)

I'd be shocked to find any good programmer who, given the definition of Pascal's triangle, couldn't produce some decent, possibly buggy, incomplete, and syntactically incorrect, code on a whiteboard to print out a row from it.

I've yet to see anyone give an argument for how a good programmer could fail this test that doesn't rest on some feature of the test that doesn't actually exist.


Subtle bugs are allowed, the definition is given to you, the solution doesn't need any documentation (and if it does, you can just make up whatever API call you expect would be present in a sane standard library)

You are now hitting at the core of the issue: to most developers, bugs, bad code, and correct APIs matter. You are asking them to look past something that is ingrained in their work ethic, personality, and quality as a developer in order to measure their worth as a developer. It does not matter what the filter truly is; there is a perception on the developer's part of what they are being tested for, because they attribute what makes them valuable to the test. As such, the correct solution is what makes them valuable. You can't tell them to give you what they perceive to be the incorrect solution and expect it to accurately reflect what they value in themselves as a developer; that's the interviewer's bias. The fact that the filter is being misinterpreted is irrelevant, or very relevant, depending on your perspective. Interviewees are going to perceive the task just as many on HN have, and no amount of prodding to give up their nature is going to make them perform well; on the contrary, they are going to perform worse. I have neglected the fact that the filter is being misunderstood for the very reason that it will be misunderstood, and for that alone it sets people up to fail.


Is there such a thing as a good developer who can't write pseudocode because he is unable to look past bugs, bad code, and correct APIs? I've never heard of such a beast.


Mike, that is not my position, and I apologize if I am not conveying it properly. My position is that, for some people, whiteboarding in front of people they just met will not convey that they can do it, because they can become distracted by the perception that they are a spectacle and their every move is under a microscope. Many developers have been in such a situation and walked away permanently affected by it. It only takes one bad interviewer criticizing your every move to make a developer analyze their every move in future interviews. But that does not mean that in their natural, comfortable environment they would have such an issue. That is why I prefer to look at code they produced in that natural, comfortable environment.


Stage fright is an excellent point. So excellent that I wonder why nobody else made it before, and instead went for BS reasons like "I need a debugger". Maybe due to stage fright...?


I think the two are linked: some developers feel secure knowing that a compiler/debugger will catch the inevitable simple mistakes we all make. Not having one, in front of an audience, makes it that much worse. A lot of those who protest would feel a lot more comfortable in front of people if they knew they had a net to catch the simple errors they make before they show off their code. That's the core of my position on the subject: coding tests guarantee that these otherwise competent individuals will look incompetent, and once they do, it's hard to minimize that experience in the interviewer's mind. Steve Jobs called it the bozo bit, and once it is flipped, it's pretty much flipped for good. Looking incompetent on the whiteboard provokes such a strong reaction that it outweighs compensating factors like looking at code already written in a more conducive environment.


You're in a room with these people having a conversation. A simple "Hey what level of engineering are you looking for in this solution?" would do nicely. Maybe some people will say that it has to be unit tested, rigorously documented, and proven correct, all on a whiteboard, and that would be really dumb. But nobody is going to say that, they're going to say "Give us a sketch - we're just looking for your thought process". The whole thing about misunderstandings and misinterpretations is just really silly when you're sitting there in a room with them. It also isn't just your code that is being judged - if you can't pose a simple question to some people sitting across a desk from you, or if you're too stodgy to ever under any circumstances write pseudocode or a piece of code that you aren't sure works yet, or make up a reasonable API that you don't remember the exact incantation for, maybe because you're too honorable or something, then I don't think you're a culture fit.


It also isn't just your code that is being judged - if you can't pose a simple question to some people sitting across a desk from you, or if you're too stodgy to ever under any circumstances write pseudocode or a piece of code that you aren't sure works yet, or make up a reasonable API that you don't remember the exact incantation for, maybe because you're too honorable or something, then I don't think you're a culture fit.

This is one of the points I am getting at. You are right, it's not just the code that is being judged; candidates know that too, so when they are standing at the whiteboard, a million things about being judged may be going through their minds. It has nothing to do with being stodgy or honorable or above the effort, and everything to do with the fact that some people just cannot perform in such an environment. People hold that it is a negative filter -- that if you can't pass it, you are a definite negative -- and observations by a lot of people are not confirming that assumption. I side with the author of the article that there is a more inclusive way to identify good candidates who fall into this category, as well as candidates who would pass the whiteboard test, without resorting to the whiteboard. I am not arguing that people should not be able to code; rather, I side with the author that there is a better way to identify the people who can, and to get more positive hits in the process, thus reducing the interviewing cycle and identifying more talent in a constrained market.


Ah, apologies, I didn't mean to advocate for coding tests as an all-or-nothing negative filter, though I see now that the thread I'm in may suggest that. It can be a good negative filter, but nobody should be disqualified by a single poor answer. Good questions are designed to reveal problem solving skills, so the path taken even to a very poor answer should be illuminating and lead the interviewer to another question.


Generally pseudo-code is accepted and if you make some small error it's not a big deal.

Do you think this accurately reflects the working conditions of a developer? Do you think that they jump into problems and that small errors are OK to them? Do you think that they will be comfortable making those small errors in front of people, given that (a) they generally would not accept them for themselves, and (b) some people can be very critical of code that is not their own? Do you think they will be able to put their best foot forward in such an unnatural situation? They may blank, they may freeze, and they may just be the type of person who reflects on an issue for a while before they set out to code.

What would you search for

I personally would search for the mathematical formula and a text description that I could put on my second monitor as a reference. But first I would search for a standard library and avoid the effort altogether.

taking a verbal description of a process and translating it into code

I personally never work from verbal descriptions; I document the issue and place it into a ticket system, in which I work the tickets based on priority.


I'm sorry but it really seems like you are making a mountain out of a molehill here just to be argumentative.

Do you think this accurately reflects the working conditions of a developer?

Who said anything about trying to simulate a normal development day? This is about finding out some of their thought processes and if they can solve an exceedingly simple coding problem.

They may blank, they may freeze and they may just be the type of person that reflects on an issue for a while before the set out to code.

Where did they say you can't reflect on the issue? That's exactly what they want: you reflecting on the problem and describing how you'd go about solving it.

If someone tells you to write a function that takes a number N and prints out 1..N in pseudocode (which is basically what the Pascal problem asks when they provide the formula for a row), and you can't even talk your way through a possible solution, you're not going to be able to complete any interview process, no matter how gentle.

I personally would search for the mathematical formula and a text description that I could put on my second monitor as reference.

So you're complaining that you can't search for something that the interviewer is already providing?

I personally never work from verbal descriptions, I document the issue, and place it into a ticket system in which I work the tickets based on priority.

If you are this incapable of listening to a two or three sentence problem description, ask the interviewer to print it out for you and write "ticket #1" on it? I mean is your nitpick really "They told me the problem out loud instead of writing it down?"

This is even ignoring the fact that taking verbal problem descriptions is an incredibly useful skill as a software engineer. I mean, I guess I could tell people to file another ticket and have the joy of email tag as I send off clarifying questions, further delaying the task.


I'm sorry but it really seems like you are making a mountain out of a molehill here just to be argumentative

I assure you I am not trying to be argumentative; I am trying to help people understand why I and the author of the article believe that this is a flawed hiring practice. I have raised some good points; you can reflect on them and utilize them, or you can discard them as rubbish. But I can tell you this: I have had a lot of success in recruiting, and I have learned by committing some of the very mistakes that I now advocate against. I used to buy into cargo-cult hiring practices, until someone smarter than me taught me how to truly identify talented individuals.


> I personally would search for the mathematical formula and a text description that I could put on my second monitor as reference.

If you don't know the formula and they haven't volunteered it, you can simulate this search simply by asking, and any interviewer using this for a proper purpose will happily write it on the whiteboard for you and walk you through it if anything is still unclear. It's not what's being tested.

What is being tested is that you know enough about programming to understand pretty much nothing more than basic flow control, basic arithmetic and perhaps trivial IO.

A question like this checks that you've at least read "Programming for Dummies" and roughly understand the overall concepts of programming -- not whether you're proficient enough to fill even an entry-level position. It weeds out the candidates who are completely unable to program at all.

> Do you think they will be able to put there best foot forward in such an unnatural situation? They may blank, they may freeze and they may just be the type of person that reflects on an issue for a while before the set out to code.

I've had smart candidates blank on trivial problems, but there's a very noticeable difference between people who blank and people who just have no clue. With people who blank, you get them to calm down, you ask probing questions to get them talking, even if it's about the weather, and they will eventually start offering up small bits and pieces of an answer that makes sense and gets them to "unfreeze" and start solving the problem. Whereas with the type of candidates this type of interviewing should be used to get rid of, you get absolutely nothing and/or total bullshit.

> But first I would search for a standard library and avoid the effort all together.

That's something you might say as a "by the way", but it isn't what the interviewer will be looking for.

> I personally never work from verbal descriptions, I document the issue, and place it into a ticket system in which I work the tickets based on priority.

Fine, so "consider the description on the whiteboard the ticket with your highest priority" would be my response to that if a candidate were to raise this objection (though, to be honest, if they did, alarm bells would already be going off in my head).


> Do you think this accurately reflects the working conditions of a developer?

That's not the goal of the exercise, however, nor is it a desirable one. The point is a very loose test of basic ability. You can test the basic ability of a runner by asking them to do a quick sprint; the fact that this is not an accurate simulation of a race is irrelevant.

> do you think that they will be comfortable making those small errors in front of people

They should be. Any programmer should be comfortable making errors and owning up to them.

> Do you think that they jump into problems and that small errors are OK to them

In this situation it is. In a "real world" situation, those would be caught by the compiler or unit tests. Of course really bad mistakes in this test would count against you.

> They may blank, they may freeze

Of course, this can happen in any type of interview. The point of interviews is to try and get the best balance of weeding out unsuitable people and finding the best ones. No matter what process you use, you will occasionally discard perfectly viable candidates...however this is preferable to accepting unsuitable ones.

> just be the type of person that reflects on an issue for a while before the set out to code.

...and that's fine...I'm not sure why you think this is some sort of "gotcha" test. It's perfectly fine to take some time...nobody is timing you. Again, this is a basic test to make sure you have the simplest skills and to get a little insight into your thinking/working. If you are uncomfortable with some aspect of the test, it is fine to say "well usually I take some time to think" or "I don't like writing on whiteboards"...etc.

You seem to be fundamentally misunderstanding the nature and point of the exercise. Errors are ok and expected because it's a simple test of reasoning and logic ability...likewise jumping in is ok because it's a simple problem.

Do you as a programmer spend a lot of time considering a simple if/else statement? Of course not...you jump in and write it. Again, we are talking about a simple algorithm here...given the description it basically writes itself for anyone with a modicum of programming experience.

The point here is to get a basic assessment of raw ability. The rest of the interview is to determine how well they might fit in the company, and any other gaps can be filled in with training.

> I personally would search for the mathematical formula and a text description that I could put on my second monitor as reference

That's the point you're missing. You don't need to search for that, it's been given to you. The description of the triangle contains the algorithm for constructing it. It's also so simple that you should be able to hold it in memory for the short time necessary, but if needed you could ask for a refresher.

> But first I would search for a standard library and avoid the effort all together.

That would negate the purpose of the test. Again, this is the most basic of basic programming tasks. It tests your ability to write simple logic and flow controls...something fundamental to writing computer programs.

Even if you solve a problem using mostly library functions, you still will need a few simple loops and logic to tie them together, and that's what this is testing.

> I personally never work from verbal descriptions

The verbal here is incidental...the point is you're working from some description or set of requirements. In this instance it's so simple that a verbal description will suffice, perhaps with one or two reminders. If you really can't hold such a simple description in your mind for 20 minutes, I'd say you're not suited for any sort of mental work and might need to see a neurologist.

It seems your objections are based on a flawed premise, a fundamental misunderstanding of what the test is, how simple and easy it is, and what it is supposed to be testing.

If you look at want ads for day labor and other physical tasks, they will often say something like "must be able to lift 50 lbs". When you get to the job site, they might ask you to lift a bag of concrete weighing 50 pounds. This is an exceedingly simple task, and anyone fit for the job will be able to do it easily. It will quickly weed out anyone who is simply unfit for the work required; anyone who is fit will pass it easily.


Thank you for taking the time to answer my questions. After reading my grandparent post, I realized that it does come off a little bit hostile. I apologize if it appeared that way to you; that was not my intent. I am really passionate about this subject because I was one of the worst offenders at testing people, and was taught how not to do it and still be able to identify quality candidates. I tend to get excited about the subject because it was such a wow moment when the revelation struck me.

More specific to your answers: I understand your position, and I truly believe that you use it in this manner. The problem I have is that for every one of you there are 20 of the former me's, who get so wound up in code tricks and trivia that we create unfair and useless interview filters. As such, there is a perception among developers about coding tests, and that perception can be summed up as "I am going to be tested on what the company thinks I should know, not on what I know." For some of us, this creates a horrible and humiliating interview experience, in which a developer stands in front of a whiteboard analyzing their every move. No matter how much someone reassures them, they still have everything but logic running through their mind. It only takes one interviewer with an ego to permanently affect a developer in this manner, and no amount of reassurance from a person they just met will help someone believe that their every move is not being judged.

Many people with Asperger syndrome fail tech interviews horribly, but a good number of them turn out to be wonderful developers. My best friend is affected by it, and I know from experience that he would never land a job if it were not for me. In spite of that, he is always the star developer at every organization I bring him into. Now, if someone sat him down and said "show me something you built and walk me through a routine," like the article suggests, they would be fascinated by his depth of knowledge.


If you need a compiler and a debugger to implement Pascal's triangle, you're simply not a good programmer. Good programmers write code. They don't just mash keys and wait for a compiler to tell them what to fix.

Any programmer I want to work with will be able to walk up to a whiteboard and write a program to print a row of Pascal's triangle or a function that determines whether one string contains another. Simple programming problems like this are the tiny problems we solve on a daily basis; if you can't implement such trivial functions without bugs, how can you expect to design or implement multi-kloc systems without bugs?
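For example, the string-containment check is only a couple of loops. A minimal C sketch (the name is just for illustration; in practice you would reach for strstr):

    #include <stdbool.h>
    #include <stddef.h>

    /* Return true if needle occurs somewhere in haystack (naive scan). */
    bool contains(const char *haystack, const char *needle)
    {
        if (needle[0] == '\0')
            return true;                    /* an empty needle matches anywhere */
        for (size_t i = 0; haystack[i] != '\0'; i++) {
            size_t j = 0;
            while (needle[j] != '\0' && haystack[i + j] == needle[j])
                j++;
            if (needle[j] == '\0')
                return true;
        }
        return false;
    }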


What an insulting and belittling post. As others have pointed out, the Pascal's triangle problem is dependent on intuition regarding the characteristics of the triangle that happen to be the most useful for solving the problem (in this case, that the sides consist of 1s). Moreover, the statement "Good programmers write code. They don't just mash keys and wait for a compiler to tell them what to fix." is ridiculous: not only are you discounting a very useful way to refactor code (change stuff and let the error messages guide you), but every programmer also makes stupid mistakes, and I don't consider that a measure of programmer quality. Maybe 10% of the quality of a programmer is his or her ability to avoid stupid mistakes in the first place; the other 90% that represents real business value is the ability to understand and diagnose mistakes when they do make them.


I don't write code on a whiteboard. You might be comfortable thinking like that; I, for one, am not. I think it's likely that I'm in the majority.

Until I get a keyboard with a tappety tap tap under my fingers my brain is at all stop.


Software engineering is an interactive endeavor most of the time. My regular day-to-day work often involves getting in front of a whiteboard with another engineer or two and drawing pictures, writing code, iteratively designing things, etc. If you can't think in front of a whiteboard, your working process may not be a good match, at least not for me.


You can't identify Tom Marra's by making them solve trivia on a whiteboard

Tom Marra's don't get hired at these big-web-co kinds of places.

This is probably the reason why there aren't many Tom Marra's at these big web firms, and why they don't seem to make anything awesome anymore beyond their initial offering.

The job these algorithm kids at these big companies seem to do is to ensure the wheels keep rolling and don't get punctured. If they do, fix them with some fancy hackery!


Well, why are you talking about mathematics and arithmetic? This is a programming interview, not an interview to hire mathematicians.

Besides, would you hire a novelist based on how well he can solve crosswords?


Of course not. It's not a trivia question to see if you happen to know the term. Or a trivia question to see if you happen to remember some particular trick to solve the problem. It's just an arbitrary task that will have you solve an easy problem that you probably have not done before, and write code for it. As such, the interviewer would obviously give the necessary background information. Either up front or when requested.


I've asked the Pascal's triangle question, but I always go to great lengths to explain what it is, as even if someone does know it, they probably haven't thought about it in quite some time.


I didn't know what it was either, but I googled it and found a simple explanation on Wikipedia. I'm sure the interviewer would at least give you that basic background information ("Pascal's triangle is a structure in which each number in the triangle is the sum of the two directly above it.")

I don't think the point is that you would be expected to know every historical abstract concept, but rather that given the basics of one, you could write code to implement a data structure around it.


Me neither... it's been 12 years for me.


The main problem with asking the same question over and over again in interviews is that you are implicitly comparing the current candidate to previous candidates, which isn't fair.

If someone genuinely has never seen a problem before, and then figures it out in 45 mins, this person may look worse than someone who memorized the answer, and solves it in 10 mins.

The best solution is to give a previously unknown question to both the interviewee and the interviewer, and have them work out the solution together. It doesn't have to be a coding question, but the interviewer should be good enough to assess whether or not the person they are interviewing is good, not from how quickly they spit out an answer, but through how they work to get the answer.


This is a great point that other folks haven't brought up. As an interviewer, you need to make sure your "calibration" is genuinely a smart candidate who is creating a novel solution, not a smart candidate who is pretending to create a novel solution while actually coding from memory!


Wow, working through a mutually unknown problem is such a great idea and I had never heard or thought of it before. Thank you sir!


I authentically hate these types of interviews.

1) I don't program on a whiteboard. It's 2012, I have tools, and they beat the whiteboard setup every time.

2) I will never implement something that I have not implemented before without at least first consulting a search engine. You see, I am not a genius who will come up with the best algorithm for a calculation every time I am put up to the challenge. My amateur attempts will be nowhere near as good as what others have collectively accomplished over the years the problem has existed.

For example, in this particular case (I have never had to implement a function that generates Pascal's triangle before), I can guarantee you that anything I came up with at a whiteboard right now would be completely inferior to what the result would be if I was allowed to do a search first. So unless you're trying to see if I am capable of writing any old crappy code so long as the result is correct, your test isn't adequate.

3) Interviews still make me nervous, doesn't matter how many times I have them, I never feel comfortable going through the process. This often leads me to simply draw blanks when given a whiteboard assignment of this nature. You may think this means I am just incapable of solving your problem, or that I perform poorly under pressure, but I can tell you that this type of "put you on the spot" pressure is much different from the type one generally feels in a work environment.


1) If the whiteboard and lack of tools is the problem, what would you think about projecting your monitor?

2) Nobody cares how good your solution is, they care about seeing your thought process while solving it. They are likely waiting breathlessly to ask you follow up questions about your poor implementation that will further enlighten them about the way you think. It would be a downright shame to disappoint them by writing a great solution right away.

3) This is a legitimately big problem. Everyone responds to the pressure of interviews differently and it is definitely not necessarily correlated to their response to the pressures of the workplace. Ideally you would notify the interviewer(s) of your anxiety beforehand and they would be willing to work with you on either a short take-home (which a lot of people also hate for other reasons), or some type of working interview (which is often infeasible).

The original article was about a specific-language whiteboard problem, which is open to arguments about tooling and internet access, but many people ask these sorts of questions in terms of pseudocode, structured written english, or even discourse out-loud, and the goal is not to determine programming language or algorithm knowledge, but problem solving skills.

edit: phrasing


Projecting my monitor / screen sharing actually works very well.

I had an interview with Microsoft in 2006. I typed my C solution in Notepad via screen sharing over DSL. The problem was "find the depth of the leaf nodes in a binary tree and figure out which leaf node is the shallowest/deepest," or something like that. I did it in 20 minutes and returned a solution that found the depths of all the leaf nodes and sorted them by depth. The interviewer saw how I worked during the session and laughed that I was the first one to solve the problem that way, and that it was much better than what they expected.
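
For readers who haven't seen this one, here is a rough JavaScript sketch of that kind of solution (my own illustration, not the original C code): walk the tree, record each leaf's depth, then sort the leaves by depth to get the shallowest and deepest.

    // Collect the depth of every leaf in a binary tree.
    function leafDepths(node, depth, out) {
      if (node === null) return;
      if (node.left === null && node.right === null) {
        out.push({ value: node.value, depth: depth });
        return;
      }
      leafDepths(node.left, depth + 1, out);
      leafDepths(node.right, depth + 1, out);
    }

    // Return all leaves sorted by depth, plus the shallowest and deepest.
    function shallowestAndDeepest(root) {
      var leaves = [];
      leafDepths(root, 0, leaves);
      leaves.sort(function (a, b) { return a.depth - b.depth; });
      return { shallowest: leaves[0], deepest: leaves[leaves.length - 1], all: leaves };
    }

    // Example: node 3 is the shallowest leaf, node 4 the deepest.
    var tree = {
      value: 1,
      left: { value: 2, left: { value: 4, left: null, right: null }, right: null },
      right: { value: 3, left: null, right: null }
    };
    console.log(shallowestAndDeepest(tree));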

On the other hand, I interviewed at Amazon around the same time. I completely choked on the whiteboard because I had not prepared for a session like that, and I was still jet-lagged from lack of sleep on the flight the previous night. Then I had another interview at Bloomberg, and I completely couldn't write code with pencil and paper.

My mind flows when I type but chokes when I work on paper or a whiteboard. I guess the fluidity of the editor is the key.

Later, when I interviewed software engineers, I tried to let them have a computer so they could write code during the session. I got a much better grasp of their potential and habits that way than from a whiteboard or pencil-and-paper test.


I completely agree about projecting - going to paper and pencil after years of vim just feels absurd - but some people find it more invasive and/or high pressure.


For the people who say they never use a whiteboard: do you never talk about problems with colleagues? When trying to understand the best way of tackling something difficult, I find few things beat a whiteboard session of sketching out possible solutions. I find that writing actual code just obscures things when I'm looking for the method or algorithm to do what I want.


I think my experiences with whiteboard effectiveness are limited to

1. Writing down pseudo code for solutions.

2. Diagrams for architectures.

3. Flow charts for possible actions during problem solving.

But if you ask me to write a real program on a whiteboard, I find myself getting interrupted by thoughts about erasing lines, moving blocks around, and worrying that my writing is too big and the board is too small for my code.

I think a shared whiteboard is definitely a very good tool for discussions. But for me, even a rudimentary editor like Notepad or vi helps me more with all the rewriting/visualizing/pondering. I'm probably not one of those programmers who can write down code without rewriting it. For me, writing a program is like oil painting: it involves constant changes.


I find it amusing that each time an interviewing technique post is made to HN, invariably a post by someone claiming they work for Google is made, always disparaging the manholes problem (I've never even heard of this being posed in the real world) and tossing out some algorithm problem as a throwaway. The best part is the torrent of solutions that inevitably follow, in every popular language regardless of whether their solution has already been presented in another language.

It's almost like Google has some sort of informal process for throwing away old interview questions this way, just for laughs...

This also makes me think I may spend too much time lurking on HN.


Sometimes I wonder if all the posters are people who are just going through the same patterns of behaviors like discussion memes. And then I wonder what if the majority of comments are bots programmed to social network and collect karma/reputation points for SEO and future YCombinator funding?


"Some of you may think such a test is a waste of time but I assure you it is not. It's astonishing how many people can't do this."

Well, I can assure you that as a kernel programmer, I think it IS indeed a waste of time. It's also astonishing that the moment the interviewer asks me something like this, my enthusiasm and interest in the interview drop by more than 50%. When I apply for a kernel programming job at Google, I am dying to talk about my systems-internals knowledge, device driver programming experience, subtle differences between the AMD/Intel architectures, and what not. Instead, what do I get? The recruiter asks me to recite by heart problems from a popular algorithms book for the pre-screening. It's like the prospective employer is telling me: I don't care what you did in the last three years. Let's start from scratch!

What also bothers me is the test for coding skills. When a company hires a Linux kernel developer, the best possible way to test for coding skills is to look at my contributions. Instead, I am asked to write code and implement malloc so that the interviewer gets a taste of my coding knowledge. I think this is a huge waste of time.

My point being that the job interview at Google is incredibly generic. I believe that just like you should have a different resume for each job you apply to, the same applies to the interview process too.

More Info: This is very specific to my interview experience at Google. The two times that I applied at Google for Linux kernel development positions, overall I had the same experience as described above.

Some more Info : I also don't understand the idea of the algorithms test during the prescreen. If you go through the algorithms book the recruiter suggests and look up on glassdoor for interview questions, there's a very fair chance that you already know the answer to the question that's being asked!


If you go through the algorithms book the recruiter suggests and look up on glassdoor for interview questions, there's a very fair chance that you already know the answer to the question that's being asked!

Most of the supposedly brilliant algorithm and math brains are basically people who crawl internet forums for puzzle questions, spending around 30 minutes every day.

They don't know jack about algorithms or math. It's just that they know enough to game the interview.


5 minute Erlang version:

  -module(pascal).
  -export([tri/1]).

  tri(Depth) ->
    tri(Depth,[[1]]).

  calc_row_rest([_]) ->
    [1];
  calc_row_rest([X,Y | Rest]) ->
    [X + Y | calc_row_rest([Y | Rest])].

  calc_row(Acc) ->
    [1 | calc_row_rest(Acc)].

  tri(X,_) when X =< 0 ->
    [];
  tri(1,Acc) ->
    lists:reverse(Acc);
  tri(Depth,Acc) ->
    NewRow = calc_row(hd(Acc)),
    tri(Depth-1,[NewRow | Acc]).

Returns a list of the pascal triangle rows:

  > pascal:tri(10).
  [[1],
  [1,1],
  [1,2,1],
  [1,3,3,1],
  [1,4,6,4,1],
  [1,5,10,10,5,1],
  [1,6,15,20,15,6,1],
  [1,7,21,35,35,21,7,1],
  [1,8,28,56,70,56,28,8,1],
  [1,9,36,84,126,126,84,36,9,1]]


Agree that a basic test can be useful, but I think two points need to be made here.

1. It falls on the interviewer to be helpful and clear enough to help a candidate understand what is being asked. Often, a brainteaser of any sort explained poorly can be a death knell for anybody, no matter how smart they are. It's imperative that the interviewer is able to thoughtfully analyze a person's line of thinking, more so than whether or not they can get to a right answer.

2. Which brings me to the second point... The issue with a lot of brainteasers for poor interviewers is that it's a binary outcome. Either the candidate already knows the answer and comes off looking like a genius, or the candidate stumbles around in no man's land with no clue what you're talking about. Again, it's key here that the interviewer is able to guide a candidate through the thinking instead of sitting back and enjoying the show (which is admittedly sometimes fun).

I definitely think there is space for brainteasers, but it really requires an interviewer with the right training to get it right (which a company like Google does not have to worry about, but a start-up may). An interesting approach I've actually heard of for start-up recruiting is to just have the candidate sit with the interviewer and try to tackle a recent problem the company has faced.


Just to clarify: I used "take-home" by analogy to taking a test in class (= whiteboard) vs. having a week to work on it in as you wished in your dorm room (= puzzle on our website).

We did not actually give people tests to take home. :)

Interviewing is a numbers game. Your goal is to come up with a process that balances time spent by the interviewer with finding a suitable candidate.

You have to admit that this is biased towards a company, like Google, that must hire many more people per unit time than most companies do. (And the Google process seems to work remarkably well for Google.)


This seems like a contrived example. You don't usually want recursive code in your product.

A more useful example was when I asked my high school AP Computer Science class years ago to test if an array of integers contained a poker straight without using a parser, assuming ace=1, jack=11, queen=12, king=13. The key was to see if they knew how to think about algorithms. I wanted them to perform a sort and iterate across the array, but was interested in what they came up with. It tested a lot of important concepts in thinking about problems, which is more important to me than correct syntax on a whiteboard or implementing valid recursion on the fly.
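
For the curious, the sort-and-scan approach described above fits in a few lines. A minimal JavaScript sketch (my assumptions: ranks are plain integers with ace=1, jack=11, queen=12, king=13, and a straight means consecutive ranks):

    // True if the sorted ranks increase by exactly 1 at every step.
    function isStraight(cards) {
      var sorted = cards.slice().sort(function (a, b) { return a - b; });
      for (var i = 1; i < sorted.length; i++) {
        if (sorted[i] !== sorted[i - 1] + 1) return false;
      }
      return true;
    }

    console.log(isStraight([9, 7, 8, 10, 6]));  // true
    console.log(isStraight([2, 3, 4, 5, 7]));   // false

Note that with ace fixed at 1, the ace-high straight (10-J-Q-K-A) is deliberately out of scope; handling it is a natural follow-up question.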


I give a short and fairly simple whiteboard programming problem when I interview programmers, and I feel like the results have correlated really well with the programmers' eventual success at our company. Of course, there could be false negatives.

I watch them work and help them out as they go; I'd get a ton less info if I just looked at the finished product.

We do also have a couple of "off line" coding tasks, which are even more informative (if I had to ditch one, I'd ditch the live one).


(So I had 10 minutes to kill)

    import scala.collection.mutable.HashMap
    import scala.math.BigInt

    val pascals = HashMap[Int,List[BigInt]]()

    def pascal(row:Int):List[BigInt] = {
      val result:List[BigInt] = row match {
        case 1 => List(1)
        case 2 => List(1,1)
        case n => List.tabulate[BigInt](n)(x => {
          if (x == 0 || x == n-1) 1
          else pascals(n-1)(x-1) + pascals(n-1)(x)
        })
      }
      pascals += (row -> result)
      result
    }

    (1 to 100).foreach(row => println(pascal(row)))

Results:

    List(1)
    List(1, 1)
    List(1, 2, 1)
    List(1, 3, 3, 1)
    List(1, 4, 6, 4, 1)
    List(1, 5, 10, 10, 5, 1)
    List(1, 6, 15, 20, 15, 6, 1)
    List(1, 7, 21, 35, 35, 21, 7, 1)
    List(1, 8, 28, 56, 70, 56, 28, 8, 1)
    List(1, 9, 36, 84, 126, 126, 84, 36, 9, 1)
    List(1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1)
    ...and 90 more rows

Prints Pascal's triangle to some 10,000 rows & beyond. So where's my Google job? :)

Kidding aside, in 1999 I got hired at Sun Microsystems based on a 1-hour written exam. The chap gave me a pencil, paper & a few programming problems & left me alone in a room. No laptop, no books, nothing. After 1 hour, he grabbed my paper, looked through the work & said "You are hired". Then we went for coffee.

otoh, I didn't get hired at 2 quant firms in 2011 based on this whiteboard-style interviewing. I got the first puzzle right, & then halfway through the 2nd puzzle the chap said "There's an easier solution", & then they upped the ante (Stirling numbers, then a convolution integral), & then I think they decided I wasn't a good fit because I couldn't work out every single theorem right there and then. I can do any number of problems, but if you could please write them down on a piece of paper & leave me alone for an hour... I can't quite think and write in front of you while you are passing snide comments on my thought process. Most of my professors, when faced with a problem they haven't seen before, typically say "I'll get back to you", and then in an hour or two they call me to their office and show me their work. They don't try to work out a solution right in front of a classroom of 100 gawking students. They do so in the privacy of their office. No offence (cause I still want that Google job :-)


Here's my naive Ruby version for what it's worth. It took me way too long, perhaps because I have a nasty cold. I've been programming since 1983. If I showed up for a code interview today, I'd probably fail miserably.

  def print_pascal(rows)
    row=[1, 0]
    (1..rows).each do |r|
      next_row = Array.new(r + 1)
      next_row[0] = 1 
      (1..row.length-1).each do |i|
        next_row[i] = row[i] + row[i-1]
      end
      print row[0..-2], "\n"
      row = next_row << 0 
    end
  end

  print_pascal(100)


Couple minutes in ruby without using the knowledge of the binomial theorem or coefficients.

  def p(row)
    [1,Range.new(0, row.length - 2).map {|l|     
      row[l] + row[l+1] 
      }, 1].flatten
  end

  a = [1]
  while true
    a = p(a)
    puts a.inspect
  end

edit: fixed formatting, sorry :(


Here is my version. Took me less than 5 minutes and worked the first time:

  def pascal(num)
    return [1] if num == 1
    res = []
    prev = [0] + pascal(num-1) + [0]
    (1..num).to_a.each { |i| res << (prev[i-1] + prev[i]) }
    return res
  end


Recursion in this case - generally in ruby as well - is not a great idea. Also, you have quite high memory requirements: O(n^2)

O(n) is quite easy in this problem.
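
For what it's worth, a single row can indeed be produced in O(n) space. A minimal JavaScript sketch (my illustration, not the poster's code), using the identity C(n, k) = C(n, k-1) * (n - k + 1) / k:

    // Row n of Pascal's triangle in O(n) space; n = 4 gives [1, 4, 6, 4, 1].
    function pascalRow(n) {
      var row = [1];
      for (var k = 1; k <= n; k++) {
        row.push(row[k - 1] * (n - k + 1) / k);
      }
      return row;
    }

    console.log(pascalRow(4));  // [1, 4, 6, 4, 1]

Each intermediate product is an exact integer, but plain JavaScript numbers run out of precision somewhere past row 50 or so; large rows need arbitrary-precision integers.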


Interesting observation about Sun: I got hired into Sun (in the 2000s), not by a brainteaser puzzle or bureaucratic game but with a bit of cleverness and discussion. Yes, Sun was big and not totally uniform, but I'll say that Sun got some seriously good talent without having to pull some of the stunts recruiters do today.


u mad bro? your solution looks quite ugly

in ruby

   def fact(n)
     n == 0 ? 1 : n.downto(1).inject(:*)
   end

  def pascal(n)
   0.upto(n) {|r| print "#{fact(n)/(fact(r)*fact(n - r))} "}
  end


Yes, using the binomial coefficient formula is another solution that is shorter, but I downvoted you for not being civil. Please read: http://ycombinator.com/newsguidelines.html


Just an aside: printing Pascal's triangle for up to 10 rows is trivial. The factorials don't get big enough, so you can simply use binomial coefficients (like in your Ruby solution). But if you want Pascal's triangle for the 10,000th row, you can use what I wrote. If you have an easier implementation, I'd be very interested.


Here's a way to do it in Haskell:

    let step row = zipWith (+) (0:row) (row++[0])
    let triangle = iterate step [1]
    take 10 triangle


1. I notice your solution uses recursion (at least I assume it uses recursion - I am not a ruby programmer). I think this would be an issue for large values of n (stack overflow).

2. I think an iterative function for fact would be more efficient.

3. I notice you do not memoize the result of fact even though you are calculating it twice for each call.

I suspect that you are trying to show how compact a solution can be, but in an interview situation, I would probably be more impressed by someone who is aware of the issues above.


What I would find really fascinating is if someone could explain to me why programmers have the urge to solve a programming interview problem whenever one is posted.

Not that anyone will ever read this... I thought I'd managed to escape this time, but then I thought it might be some good practice, so I timed myself...

  nitrogen@n2:~ $ time bash
  nitrogen@n2:~ $ vim /tmp/pascal.c
  nitrogen@n2:~ $ cd /tmp
  nitrogen@n2:/tmp $ gcc -std=c99 -Wall -Wextra pascal.c -o pascal
  pascal.c: In function ‘main’:
  pascal.c:15:21: error: expected ‘;’ before ‘)’ token
  pascal.c:15:21: error: expected statement before ‘)’ token
  pascal.c:16:2: error: too few arguments to function ‘calloc’
  nitrogen@n2:/tmp $ vim pascal.c
  nitrogen@n2:/tmp $ vim pascal.c
  nitrogen@n2:/tmp $ ./pas^C
  nitrogen@n2:/tmp $ gcc -std=c99 -Wall -Wextra pascal.c -o pascal
  nitrogen@n2:/tmp $ ./pascal 1
  1 
  nitrogen@n2:/tmp $ ./pascal 2
  1 1 
  nitrogen@n2:/tmp $ ./pascal 3
  1 2 1 
  nitrogen@n2:/tmp $ ./pascal 4
  1 3 3 1 
  nitrogen@n2:/tmp $ ./pascal 5
  1 4 6 4 1 
  nitrogen@n2:/tmp $ ./pascal 6
  1 5 10 10 5 1 
  nitrogen@n2:/tmp $ ./pascal 7
  1 6 15 20 15 6 1 
  nitrogen@n2:/tmp $ exit
  exit
  
  real	4m5.621s
  user	0m0.928s
  sys	0m0.136s
The initial syntax errors were including an extra parenthesis after atoi() and passing a total size to calloc() instead of an element count and element size. My solution is vulnerable to integer overflow and doesn't handle a negative row parameter. Simplifying assumptions: the first row and column will always be 1, so I don't recalculate it. Everything else just gets shifted in at the right. The code:

  #include <stdio.h>
  #include <stdlib.h>
  
  int main(int argc, char *argv[])
  {
  	int row;
  	int *data;
  	int i, j;
  
  	if(argc != 2) {
  		printf("Usage: %s <row>\n", argv[0]);
  		return -1;
  	}
  
  	row = atoi(argv[1]);
  	data = calloc(row, sizeof(int));
  
  	data[0] = 1;
  
  	for(i = 1; i < row; i++) {
  		for(j = i; j > 0; j--) {
  			data[j] = data[j] + data[j - 1];
  		}
  	}
  
  	for(i = 0; i < row; i++) {
  		printf("%d ", data[i]);
  	}
  	printf("\n");
  
  	return 0;
  }


Pen.io founders had pretty much the same experience: http://devinterviews.pen.io/


In Java:

    public class Pascal
    {
        public static long[] getRow(int rowNum)
        {
            long[] thisRow = new long[rowNum];
            if (rowNum == 1)
            {
                thisRow[0] = 1;
            }
            else
            {
                long[] lastRow = getRow(rowNum - 1);
                thisRow[0] = 1;
                thisRow[rowNum - 1] = 1;

                for (int i = 1; i < rowNum - 1; i++)
                {
                    thisRow[i] = lastRow[i - 1] + lastRow[i];
                }
            }
            return thisRow;
        }

        public static void main(String args[])
        {
            int getRowNum = Integer.parseInt(args[0]);
            long[] pasc = Pascal.getRow(getRowNum);
            for (int i = 0; i < pasc.length; i++)
            {
                System.out.print(pasc[i] + " ");
            }
            System.out.print("\n");
        }
    }

Edit, cleaned up code:

How do I paste into here whilst preserving line breaks?


I should have set a timer on this one but I think it took 10 mins. This is what I ended up with in javascript - notice that it can print the whole triangle as a by-product of the implementation, but also because I have a (bad) habit of doing more than just what was required. Most of the implementation (testing) issues I suffered were due to lack of js knowledge (where is clone?) but the original skeleton of code that I wrote just thinking about the problem didn't need to change. I wouldn't have got it exactly right without a debugger though.

   function printPascal(x, triangle) {
      var row = [];
      while (x) {
         var next = clone(row);
         next.push(1);
         for (var i = 0; i < row.length - 1; i++)
            next[i + 1] = row[i] + row[i + 1];
         row = next;
         x--;
         if (triangle)
            print(whitespace(x) + row);
      }
      if (!triangle)
         print(row);
   }

   function clone(list) {
      var clone = [];
      for (i in list)
         clone.push(list[i]);
      return clone;
   }

   function whitespace(x) {
      var white = " ";
      while (x) {
         white += " ";
         x--;
      }
      return white;
   }

   printPascal(10);
Ignoring the whitespace and clone functions, is there a much simpler way of doing this?


Happy fun time:

    pascal = iterate (\xs -> zipWith (+) ([0] ++ xs) (xs ++ [0])) [1]
Bread and butter:

    void pascal(int n, int *buf)
    {
        for (int row = 0; row < n; row++) {
            for (int prev = 0, col = 0; col < row; col++) {
                int curr = buf[col];
                buf[col] += prev;
                prev = curr;
            }
            buf[row] = 1;
        }
    }


I like both of those solutions. The Haskell solution is almost the same as what I had come up with but I wasn't familiar with iterate, which makes it even nicer. So thanks for that.

Looking at the C version, I wondered if it could be done without temporaries curr and prev. My no-temporaries version is not really any nicer, though it does shave off a few lines. :P

    void pascal(int n, int *buf)
    {
        for (int i = 0; i < n; i++) {
            buf[i] = 1;
            for (int j = i - 1; j > 0; j--)
                buf[j] += buf[j - 1];
        }
    }


Yeah, that works. My left-to-right traversal would be also a little nicer if C had parallel assignments as in Python:

    for i in range(n):
        p = 0
        for j in range(i):
            buf[j], p = buf[j] + p, buf[j]
        buf[i] = 1
Of course, you can also do this in C with a riff on the old in-place swap trick, e.g. buf[j] = buf[j] + p, p = buf[j] - p, but that's too clever by half, albeit kind of cool.


F# doesn't have an in-built iterate function, but found an equivalent on the F# Snippets site -- http://fssnip.net/18.

    let rec iterate f value = seq {
        yield value
        yield! iterate f (f value) }
Using the above we get ...

    let pascal = iterate (fun xs -> List.map2 (+) (0::xs) (xs @ [0])) [1]


And now, for your entertainment, an implementation in a language I'm pretty confident has a user base of just me:

    gimli> pascal <- local {
    gimli+   loop <- function(n, xs) {
    gimli+     if n > 0 then do
    gimli+       print(xs);
    gimli+       loop(n - 1, [0, xs] + [xs, 0]);
    gimli+     end
    gimli+   };
    gimli+   function(n) loop(n, [1])
    gimli+ }
    func(n) loop(n,[1])

    gimli> pascal(0)
    F

    gimli> pascal(1)
    1
    F

    gimli> pascal(9)
    1
    [1,1]
    [1,2,1]
    [1,3,3,1]
    [1,4,6,4,1]
    [1,5,10,10,5,1]
    [1,6,15,20,15,6,1]
    [1,7,21,35,35,21,7,1]
    [1,8,28,56,70,56,28,8,1]
    F
To clear up what's happening in the recursive call to loop, the (+) operator represents vector addition, not list concatenation:

    gimli> xs <- [1]
    1

    gimli> xs <- [0, xs] + [xs, 0]
    [1,1]

    gimli> xs <- [0, xs] + [xs, 0]
    [1,2,1]


I've written and submitted a post that further elaborates my views on how to properly use programming problems in interviews (with a discussion of Pascal's Triangle in particular):

http://news.ycombinator.com/item?id=3431107


Cletus - I had never coded (or even thought about) this problem before, but your post intrigued me enough to try it out.

I put something out in Java in about 20 minutes and 50 lines using a PT class, a newRow() method that swaps the current row with the previous row and sets up the new current off of the previous (prev[element] + prev[element -1]), and a printRow() method.

My solution works well, but I'm curious how you would assess this solution during an interview. Would my language choice be a pro/con? What is the avg time you see it take an interviewee to complete this problem and how many LOC is it usually accomplished in under the pressure of an interview?


This kind of problem has really short solutions when you have access to collection-at-a-time operations. For example, in Haskell:

    pascal 1 = [1]
    pascal i = zipWith (+) ([0] ++ pascal (i - 1)) (pascal (i - 1) ++ [0])
I'm not answering your question but just wanted to pique your interest in this because you were asking about language choice and had mentioned that your solution took 50 lines.

The above solution expresses the recursive solution as the sum of two earlier rows after appending some zeros. There is very little room for errors to hide and it provides you with a kind of 'definitional' starting point to begin optimizing if you want.


Beginner Haskell Solution:

    pascal 1 = [1]
    pascal 2 = [1,1]
    pascal n = [1] ++ sum' (pascal (n - 1))

    sum' [] = []
    sum' [1] = [1]
    sum' (x:xs) = [x + head' xs] ++ sum' xs

    head' [] = 0
    head' xs = head xs


1) When you're writing a function on integers or lists using explicit recursion, there's usually a need for only one base case. Do you really need a separate definition for pascal 2? What about sum' [1]?

2) Often it's better to use fold instead of explicit recursion. If you wonder whether you can rewrite your function in terms of fold...

"A tutorial on the universality and expressiveness of fold" http://www.cs.nott.ac.uk/~gmh/fold.pdf


Your solution is less important than your process for getting there. The interviewer wants to determine how you think about problem solving. He wants to make sure that you do not just hack at the code until it seems to work. The amount of time required is generally not important unless it becomes excessive. For a problem like this 10 to 15 minutes is probably reasonable, 30 minutes might be a red flag and 45 minutes might end the interview (these times are very rough estimates and might be longer depending on the amount of discussion that takes place). Once a working solution is developed it is common (in general, I do not work at Google) to ask an applicant how they might improve or optimize it without actually having them do the additional work.


I think for internships and early programming jobs, these kinds of questions are a great idea. They show whether you've learned anything in your CS degree.

My implementation took 15 min using JavaScript. BTW, I'm a freshman CS major.

  function row(len){

    if (len==1){
        return [1]
    }else{
        var p = row(len-1),
            arr = [];
        for (var i=0; i<len; i++){
            if (i==0){
                arr.push(p[0]);
            } else if (i == len-1){
                arr.push(p[i-1]);                
            } else {
                arr.push(p[i]+p[i-1]);                
            }
        }  
        return arr;
    }        
  }


> Just because you can create a function to return a row of Pascal's triangle doesn't mean you're a great programmer but if you can't, it almost certainly means you aren't. It's a negative signal filter, nothing more, but an incredibly quick and useful one.

Or as we say in the medical field, this test has high "sensitivity" :)

http://en.wikipedia.org/wiki/Sensitivity_and_specificity#Sen...


My quick litmus test is to have someone write simple "map" and "reduce" functions in JavaScript.


That's funny to me; it seems your simple examples do the same thing as the Pascal's triangle example. I work in JavaScript on a mostly daily basis and had no idea what they were. After about five minutes of research I see what they are and how they work. The reason is that my day-to-day work doesn't involve a large number of arrays, so I've had little need for those two functions. If I weren't allowed to research them beforehand or have them explained, then I would fail. Unless you allowed me to research them to show I can grasp new concepts and use them as needed.

Now I've seen them I'll keep them in mind if I ever have a need for them.

Although, in my research it was stated that these were extensions to the ECMA-262 standard and may not work everywhere. Therefore you would have to write up your own Array.prototype.map and Array.prototype.reduce to make them work. Is that true? If so, wouldn't seem simple to me unless you use them on a daily basis.


> If I wasn't allowed to research those beforehand or have them explained then I would fail

It's implicit in all of these that the concept is explained to you. Nobody would ask you to "implement a X" expecting you to know what X was (be it Pascals triangle, fizzbuzz, map, etc) unless X happened to be a fundamental concept in the field you are working in (not the case for any of the examples so far)

> Therefore you would have to write up your own Array.prototype.map and Array.prototype.reduce to make them work. Is that true?

Yes, that's sort of the point here. Javascript has these functions as methods of arrays (in some implementations), but the idea of the test is to write your own. They're both pretty simple, basically a for-loop and some logic.
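
To make the suggestion concrete, here is a minimal hand-rolled sketch of the two functions (not a full Array.prototype polyfill; it ignores sparse arrays, thisArg, and the no-initial-value case of reduce):

    // Apply fn to every element and collect the results.
    function map(arr, fn) {
      var result = [];
      for (var i = 0; i < arr.length; i++) {
        result.push(fn(arr[i], i));
      }
      return result;
    }

    // Fold the array into a single value, starting from initial.
    function reduce(arr, fn, initial) {
      var acc = initial;
      for (var i = 0; i < arr.length; i++) {
        acc = fn(acc, arr[i], i);
      }
      return acc;
    }

    console.log(map([1, 2, 3], function (x) { return x * 2; }));          // [2, 4, 6]
    console.log(reduce([1, 2, 3], function (a, x) { return a + x; }, 0)); // 6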

On another note I have a hard time believing you work in JavaScript on a day to day basis and are not working with arrays. If you use jQuery or a similar library, you are working with arrays all the time and also using map-like functionality even if you don't know it by that name.

$(".foo").click(function(){})

Takes an array of elements and maps over them.


Well, that's true. I use jQuery and similar libraries as I primarily work on front end development for websites. I was speaking more of working with raw javascript and digging into the array without a library. But if you asked me to do what map() and reduce() ultimately do, which you say would happen more or less, then I'm fairly certain I could complete the task. It was more that I was saying I was unfamiliar with those two particular terms.


    (fn [r]
      (reduce #(cons
        (* (first %1)
           (/ (- r %2) %2))
      %1) [1] (range 1 r)))


    #!/usr/local/bin/perl

    # print a row of pascal's triangle

    print "1\n";

:)


Whenever I see a firm use brainteasers during the interview process, I just wonder why they don't simply recruit the best Sudoku players. That seems about as valid a way to judge software development skills.


Sudoku measures how creative you are at finding the best logical solution, so that may actually be a good idea :)

I suggest this with tongue-in-cheek, of course, but it's about as effective as any other method.


I work for a web development agency, which develops projects for some of the largest companies. We generally work on App Engine (python).

At the day of my interview, they didn't really ask me any brain teasers.

They asked if I knew App Engine, I showed them my open source project for App Engine.

They asked if I knew django-nonrel, again I showed them another open source project of mine, that used django-nonrel.

That was it.

Since the day I was hired, I have done plenty of good work. Now I'm entrusted with a global CMS project which is being used by hundreds of people and viewed by millions.

If they had asked me brainteasers, I would probably have failed most of them, as I don't really practise those kinds of problems. But I like building web apps. And that's what I am doing right now.

If your company earns money by solving brainteasers, then you should ask them during interview.


Companies like Google hire hundreds (thousands?) of developers per year, hiring policies at 37Signals are bound to be different from those at Google.

If you have to weed out thousands of candidates, trying brainteasers to estimate their problem-solving capabilities doesn't seem to be a bad strategy. If a candidate is not familiar with a particular problem, his/her solution may be suboptimal, but the process of getting to that (suboptimal) solution may be a good indicator of the candidate's problem solving skills.

Obviously, syntax-checking a candidate's on-paper Javascript code isn't going to help anyone.


Google does not use "brainteasers" and in fact has a specific policy against that kind of question. They ask substantive technical questions that often involve coding on the whiteboard.


I applied for a developer job at Google once and eventually had two phone interviews.

I was asked to come up with a sorting algorithm that would run on 4 computers with more data than could fit on any one computer. I was then asked to do a complexity analysis on my algorithm.

The only thing I think that really tested was how long it had been since I took an algorithms course in college.
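
For readers wondering what an answer might look like, here is a rough sketch of one textbook approach (my assumptions, not necessarily what the interviewer wanted): each machine sorts its own shard locally, which is O((N/k) log(N/k)) in parallel, and a coordinator then does a k-way merge of the sorted shards in O(N log k). With k fixed at 4 machines, the merge below just scans all shard heads:

    // Merge k sorted arrays (stand-ins for the sorted shards on each machine).
    function kWayMerge(shards) {
      var pos = shards.map(function () { return 0; });
      var merged = [];
      while (true) {
        var best = -1;
        for (var s = 0; s < shards.length; s++) {
          if (pos[s] < shards[s].length &&
              (best === -1 || shards[s][pos[s]] < shards[best][pos[best]])) {
            best = s;
          }
        }
        if (best === -1) break;              // all shards exhausted
        merged.push(shards[best][pos[best]++]);
      }
      return merged;
    }

    console.log(kWayMerge([[1, 9], [2, 4], [3, 8], [5, 7]]));
    // [1, 2, 3, 4, 5, 7, 8, 9]

In the real setting the merged output would be streamed back out in chunks (or the data range-partitioned up front), since by assumption it doesn't fit on the coordinator either.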


I had a similar one. Actually interviewing for a management position with no coding involved. The worst part was that a lot of the questions were imprecise and/or I could tell he expected an answer to a different question that "normal people" would look up rather than remember by rote.

One question that annoyed me in particular was "what is stored in an inode?". First problem: lots of file systems don't have inodes, and those that do store different information in them. OK, I explained that and asked for clarification. He wanted the inodes in UFS. I told him I had no idea - I've never had any reason to look into the on-disk structure of UFS. Why would I? BSD is specifically not on my resume, though I've used it now and again. I offered up a description of ext2fs instead, since I happen to roughly remember that, and talked around differences with other filesystems. But he was clearly not happy.

This was one of a series of similar questions.

The recruiter was annoyed to no end, and managed to get the interview result thrown out, but by then I'd gotten so annoyed about the whole process (which had dragged out for weeks) that I just wasn't interested any more.


Sorry, but that's a reasonable question for assessing a particular type of skill. That you seem to think they're looking for you to remember some particular parallel sorting algorithm from college tells me that this probably isn't your strongest skill. That doesn't mean you're a bad programmer.


It's been seven years since I've taken an algorithms course, but I have a generic solution in my head (assuming nothing about the input data other than that it doesn't fit in a single machine's main memory and is uniformly distributed), as well as its time complexity.

That said, I work on distributed systems for a living and have actually had to do similar things (manipulate data sets larger than a single machine's main memory across a cluster of machines).

On the other hand, that's not a skill 37signals needs. It's insane to think that Google, 37signals, and a random startup could and should use the same interview process.


I used to work at Google. When I interviewed I was rusty on algorithms, so I studied up on them, even delaying my on-site interview to have more time to get ready. Sure enough they asked algorithms questions and I did better on them thanks to my studying. So I think it's also testing whether, given strong evidence on what would be helpful preparation, you choose to do that preparation. (Obligatory Paul Graham essay reference - "If college applicants realized how quick and impersonal most selection processes are, they'd make more effort to sell themselves" [0])

[0] Two Kinds of Judgment http://www.paulgraham.com/judgement.html


A problem with that is that they assume they're the most attractive option. They might be to some, but I think that pool of people is dwindling. It took five attempts from different Google recruiters before one managed to "sell" Google sufficiently for me to be willing to do a phone interview.

Then I was given interview questions of a type that nobody else would give for a position at the level I interviewed for, and I quickly lost interest again as to me the interview questions advertised a lack of understanding of the skills required for the position, or a fundamental disagreement about how technical management should be carried out.

And pleading from the recruiter was not sufficient to get me to do another interview (after she got the result of the initial interview thrown out because she agreed with my assessment). The last recruiter I dealt with was good, but the impression I got of the interview process put me off for a long time.

It took several months, and in that time they'd still not been able to find anyone else. I've never, ever spent that long on a recruitment process before, and the only reason I bothered was out of curiosity and because I wasn't really looking. If I had been looking, I'd have found and taken another job long before Google would've managed to get their act together.

I make an effort to sell myself for positions I badly want, but Google doesn't seem to be interested in selling itself in the recruitment process, whereas most other companies I've interviewed with are, and it certainly isn't even in the top 10 of the list of places I want to work.

But every time I've dealt with Google recruiters, I've had to waste time dragging information out of them that I'd normally expect the recruiter to be eager to present to me.

I'm not going to make an effort if they've not gotten me to really want the job first.

Recruiters I've spoken to all tell me that the people they take out of Google are very often disillusioned and thought it would be very different from their actual experience, and that dislodging Google employees has gotten much easier as it's no longer so much the pinnacle of cool places to work in the eyes of a growing number of engineers, so I think that's going to become a bigger and bigger issue for them when hiring.


All fair points. I had one friend who interviewed with about 13 different people at Google before getting an offer, which was totally absurd, and a lot of their friends got a negative impression based on that (soon after they set a cap of about 8 total interviews per candidate). So I can imagine having their interview process do other bad things, like not being forthcoming with information or trying to sell the job.


It's testing how well you can deal with problems that don't fit on a single node -- most problems at Google don't. And if you can't do complexity analysis, you can't scale to very large datasets -- which Google has lots of.


This was a few years ago, but I was given a brain teaser/game where I had to find the differences between two sets of Google doodles. It was supposed to prove I had attention to detail.


What type of position did you apply for? Questions for software engineers and positions like product manager are likely vastly different.


It was for a front-end design/programming position.


If I were applying, I'd still consider this a "brainteaser." It serves the same purpose and is likely equally abstracted from the tasks of the real job.


I would submit that the biggest reason for this view, and the reason many people hold it, is that 37signals - like most programming jobs - is doing CRUD work, where non-algorithmic skills are probably more important than algorithmic skills and IQ is also less important. Brainteasers are an IQ test where you are not concerned about false negatives.


I agree with CRUD jobs. And I think in reality most programmer jobs are like that. Unless you get to the point where you design something (ex: a system) and then implement it.


I'd feel uneasy if asked to show some of my code.

Obviously I cannot show something I'm currently working on, and every time I look back at some code I wrote six months ago I find myself wondering "did I really write this piece of crap?!". And it's always been this way - at least since I was six months into programming. I hope it is a good sign.


It's a great sign as long as code you wrote 6 months ago is better than code you wrote 12 months ago--it means you're getting better and learning.

I think as long as you're applying for jobs you feel you're appropriately skilled for, you shouldn't get too self-conscious about your code. Like most personal things, it probably looks worse to you than it does to someone else. The fact that you are self-conscious about it likely means you have enough awareness to prevent you from becoming overconfident in your skills.


For what it's worth, I expect people to have that opinion about their (older) code samples. If you get the feeling that people are judging you by the poorer sections of your code sample, speak up! Start telling them what you think is wrong with it. They might not be worth working for if they don't listen to that.


Obviously i cannot show something i'm currently working on

Is this obvious? It's not like photographers don't have a portfolio of past projects that they use to showcase their work. Unless you code under an NDA, of course.


I knew an engineer who used the 'why are manhole covers round' question in every interview. He was always shocked at how 'quickly' the interviewee figured out the 'answer' of "so it doesn't fall down the hole". These new kids - they're smart. As far as I know, he never figured out that 90% of people have already heard that one.


My answer to the question was that covers follow the shape of the hole underneath; the hole underneath is round because such holes are generally cut with rotating tools that create a circular shape. Cutting square holes is more expensive.


pffft, what a silly question. Manhole covers are round because manholes are round...why would you make a square cover for a round hole?

(Seriously though, there are a lot of conceivable reasons, and while "so it can't fall into the hole" is one of them, I doubt it's a major one - a lip prevents that. I think the fact that a circle doesn't need to be "lined up", and that round covers are easier to move, etc., matters much more.)


Read the comments, and I've got to say: I'm not one of you guys. I don't write code, I build products (otherwise known as software), that human beings use.

I looked up Pascal's triangle in Wikipedia, so now I know what it is, and I could write the code tonight to handle the shit out of it, but I'd probably mess it up on a white board when commanded to perform like a monkey in front of guys in T-shirts.

I own and run a software company that I started myself. I built the first 5 successful products (not code, products - there's a difference [kind of a big one too]) myself, and I don't give a crap about Pascal's triangle. I'd probably fail every coding test on the planet, but when it comes to delivering finished, polished, working products, that 1000s of human beings will actually use without picking up a phone and calling customer support, I've got that covered.

Stay classy with your 'code' guys.


I was rejected in a recent interview (two months later I'm still waiting to 'hear back', which means rejection) because I wasn't competent at writing algorithms to do various tasks, such as implementing an insertion sort based on this image: http://en.wikipedia.org/wiki/File:Insertion_sort_animation.g...

My confusion was the job was looking for a generalist who had experience with lots of different technologies. Specifically, working with node.js and socket.io polling methods, API design, working with some python framework and backbone.js.

I'm honestly not sure what the lesson there was. Am I a bad developer? Do I navigate interviews poorly? Does my breath stink?


I wouldn't take it personally. People get hired for all kinds of reasons, and not getting a specific position just means that they didn't think you were the best candidate for that position. It certainly doesn't mean you are a bad developer or anything is wrong with you. It could be they were looking for someone more experienced, less experienced, different experience, salary expectations, the job description they gave didn't accurately describe the position they are hiring for, etc...

If an employer says they will get back to you and 2 weeks go by, it's worth contacting them and asking if they have made a decision yet. If they say yes and you don't have any clue why they wouldn't have hired you, I don't think it hurts to ask them why not. You might get a BS answer or you might get some good insight.

Just keep improving your skills, and keep looking.


Was the image the only description of the insertion sort algorithm you were given? If so, sounds like a pretty flawed interview process.


Not sure I agree.

I wouldn't have been able to tell you what an insertion sort is, by name. But by looking at that image, I can describe it pretty well:

Have an outer loop that counts from the first position in the list to the last position (call it n...). At each step in the loop, make sure that the first n items are sorted. This probably implies an inner loop that finds where, within the first n-1 items, to insert the nth item. And given that with only one item it's already sorted, we can probably start with n on the second item.

Yeah, I think I could code that, given the image.
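
For comparison, that description translates almost directly into code. A quick JavaScript sketch (shifting larger items right rather than doing an explicit insert):

    function insertionSort(a) {
      for (var n = 1; n < a.length; n++) {   // keep the first n items sorted
        var item = a[n];
        var j = n - 1;
        while (j >= 0 && a[j] > item) {      // find where item belongs
          a[j + 1] = a[j];                   // shift larger items right
          j--;
        }
        a[j + 1] = item;
      }
      return a;
    }

    console.log(insertionSort([5, 2, 4, 6, 1, 3]));  // [1, 2, 3, 4, 5, 6]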


That image is horrible. I could code up insertion sort pretty quickly. Trying to figure out what that image means though...


I'm convinced that interviews are a complete crapshoot for both sides. My first job interview went like this.

1. 45mins late for the interview.

2. Quizzed by a chain of brilliant engineers. Couldn't answer a single technical question they asked. Lots of "Sorry, my experience was more with X, not Y".

3. Flatly said "You guys are looking for someone else, I don't have the skills you're asking for".

->Hired because I was honest. Learned quick. Promoted every year.


Awesome story.

Who was the employer?


Unfortunately a large number of us work for employers with strict IP agreements and we cannot just show a potential employer some code or work on a small project for them so that they may "try [us] out". Even working on open source projects can be tricky.


I haven't really had to be interviewed in 15 years, but the last one I had was probably the best. I walked into the office of the guy who would be my manager; he introduced himself, then immediately turned to the whiteboard and started writing and drawing while saying, "This is what I'm about to start working on; we need to figure out how to do it." We spent the next 3 hours working out a solution together.

I ended up not accepting their offer, mainly because of the things I learned in the interview. But I still feel like that was a good way for both sides to evaluate each other.


I thought they hire them based on their choice of personal computers (sorry, couldn't resist)?

http://david.heinemeierhansson.com/arc/000433.html


I'm really curious what HN's thoughts are about this. I didn't catch the discussion back when it might have happened on this board.


To my knowledge, it has never been submitted. Perhaps you should; I'm curious as well.


I think that article was pre-Win7, so it isn't really relevant anymore, except maybe in a specifically Ruby niche (which is fine for 37signals).

I can't say for sure because the post doesn't have date (day of year but no year). I wouldn't argue smart-aleck technical shibboleths with someone who can't display a date unambiguously without overflow rollover bugs.


Here is Jeff Atwood's response to that post dated Feb 2008, which gives you some idea about the date of the DHH post:

http://www.codinghorror.com/blog/2008/02/douchebaggery.html


Turns out David was right: go to any open-source-focused conference and you will see everyone on an Apple computer.


In what way does Windows 7 change things?


Puzzles and brainteasers seem rather off-putting to me in an interview environment. It's like the interviewer is saying, "I can solve this, are you as smart as I am?", which isn't the type of relationship I want with coworkers. I prefer to steer interviews into a more collaborative conversation, to see if I would like working with the person.

Anyone who expects me to spend three or more hours on some take-home test just to interview with them, or who thinks that the only people worth hiring are those for whom "the first thing you want to do when you wake up in the morning is code something," isn't someone I want to work for. I love programming, but at the end of the work day I'm going to go home and engage in other aspects of life. I wouldn't want a boss who had different expectations.


I say outright that I'm not an algo guy; you already have those types of people -- namely the guy interviewing me. But they don't know how to scale systems or understand the overall product. If they want to look at code even I call bad, I can point them at my github account :)


In my experience, I haven't found this to be true. The algo-guys I've met are also critical thinkers when it comes to designing scalable and reliable systems.


I love the honesty and refreshing look your post delivers. It hadn't occurred to me until I read it that maybe there is a programmer personality type that actually likes being on display, coding on a whiteboard capable of doing recursive algorithms in 2 minutes flat. Thanks.


I made a post on HN a couple of weeks ago basically saying the same thing. It can be rather frustrating to be asked questions with seemingly no relevance to the actual position.

I understand what they're going for though - the companies that do it correctly are looking at the candidates thought process through a challenging situation.

While I think it's important to figure this out, I believe it's more important to know how well the candidate can do the actual job, that of sitting down and writing quality software. Brainteasers and writing modified sorting algorithms won't get this (unless the company is in the sort algorithm business I suppose) but having the candidate write code which could actually be used in the job would.


I remember reading Heroku does something similar. Using applicants on a real project works well.

I have always tried to push my potential employers to use me on a small real-life project. I know it's time consuming (and you end up working for them for free), but it works both ways. I get to know what kind of work environment they have, and they get to know me in a real setting. I find it much more comfortable, because the puzzles always seem to be hit or miss: the solution may click or it may not.

I know it's a start, but if this catches on in the industry I expect employers to even pay for your 30-40 hour interview.


I expect employers to even pay for your 30-40 hours interview.

It's great if the new employer pays you. But what about your current employer, the one at which you've just had to spend half your annual allotment of vacation days in order to audition for a new job that you might not get?

Auditioning is definitely the best way to, well, audition people. But it puts a constraint on your candidate pool, and depending on your company and your industry that constraint can be a problem. In 37signals' situation this doesn't matter - they're built entirely on open-source tools, they don't require relocation, they can hire freelancers from a pool of freelancers that they themselves created and fostered and routinely market to, they are so famous and impressive that even in a tight labor market people will line up to try and impress them, et cetera. But there are many companies that aren't 37signals, though the 37signals evangelism team is certainly trying to change that. ;)


We do follow something quite similar. The write up on how our process works is here: http://www.craigkerstiens.com/2011/12/02/how-heroku-works-hi...


I don't understand why people are forced to write on a whiteboard. Some get very stressed and lose their ability to think; some can't multitask (draw on a whiteboard, think, and talk to the interviewer at the same time). Just give them the task, give them a computer, give them 30 minutes of privacy, and then let them explain the complete, working, tested solution - you get better results, as this situation is more like a coder's regular daily routine.


There's nothing more infuriating and insulting to me than being asked to write some lame string sorting function on a pad of paper during an interview. I've been burned more than once because of pop quizzes like these before... lots of missed opportunities because some HR person thinks that writing dinky code no one actually uses in a real project is more important than my portfolio of projects that I've designed, created, and maintained.


Been considering the following hiring technique, interested in thoughts/opinions:

We accept applicants of any experience level. To apply, find an open source project on github, get a patch accepted to it. It has to be of useful scope with a unit test, object-oriented code only. Send us the diff, a brief work history, a reference and why you want to work for us.

If we like it, come work with us for a day, paid at market rates, and we can both assess whether it's a good fit.


Google doesn't ask these questions! It's a fiction that is probably perpetuated because the topic is SEO linkbait.

Again: Google doesn't ask these questions!


I'm always amused by interview questions.

Primarily because I do terribly at whiteboard problems thanks to disgraphia (terrible handwriting), and secondly because while the questions might give some indicator of whether or not you can code hello world or a binary tree depth-first search algorithm, they do a terrible job of testing your actual domain knowledge.

For instance, when interviewing for an SDET position I've never been asked a single question about implementing a DI framework, designing custom asserts, designing a test object/mock object, etc.

When interviewing for a web development position for high-volume sites I've never been asked about cache invalidation, reducing HTTP requests, or designing an ajax event system to reduce unnecessary calls.

Etc.

I have been asked a bunch of IQ tests masquerading as mid-to-low-level programming questions, and random trivia questions about language features.


I'm going to have to remember that "dysgraphia" word. It's a lot easier to be excused for something that sounds like a medical condition, rather than just saying my handwriting sucks.


Also helps if you were diagnosed with dyslexia & dysgraphia in middle school and have a paper trail. ;)


Puzzle-solving ability is a reliable indicator only if the candidate hasn't specifically prepared for the puzzles. Many Indian IT companies in the mid-to-late '90s conducted exams with difficult programming puzzles in them. This was great for a while, but soon those who wrote these exams told everyone else and the puzzle pool dried up. Later groups scored really well because they were better prepared against a known pool of puzzles. These days most IT companies have moved on to SAT/GRE-style analytical problems.

Imagine your luck if you are at a Google interview and already know Pascal's triangle. You can just put on an act, pretending to analyze various aspects before unveiling your grand solution.

If companies are merely using analytical ability as a filter, an SAT/GRE-style exam will do better, because the problem pool is much larger, making it less vulnerable to preparation.


We can all agree that there are different kinds of programming. There are some really hard problems out there that need to be solved. There's also the kind of programming that's a matter of getting users to interact with data. When it comes to web application development (my background), I think puzzles aren't the right way to approach the interview. To be honest, it shouldn't be the job of the company to find out what the interviewee does and doesn't know. It's the duty of the applicant to control the interview: come in with their laptop blazing, and either walk through building a web application from scratch or show off something they are building, talking about the decisions that were made along the way and why.


I've been in a couple of these interviews before, and I've also had to interview programmers I wanted to work on projects for me ... an interesting approach that I haven't had the opportunity to test exhaustively is to start out like the computerized SAT exams.

Let me explain.

On the SATs, they ask you a simple question, and if you get that right, they step up the difficulty, and on and on until they discover you're a genius. If you get the answer wrong, they ask you another question of similar difficulty to the last question you got right, and so on.
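To make that mechanism concrete, here's a minimal Python sketch of the adaptive loop - purely illustrative, with made-up names (`question_bank`, `ask`) rather than anything from an actual test:

    # Illustrative only: question_bank maps difficulty level -> list of questions,
    # and ask(q) is a stand-in for posing the question and judging the answer.
    def adaptive_interview(question_bank, ask, start_level=1):
        level = start_level
        highest_correct = 0
        max_level = max(question_bank)
        while question_bank.get(level):            # stop when the current level runs dry
            question = question_bank[level].pop()
            if ask(question):
                highest_correct = max(highest_correct, level)
                level = min(level + 1, max_level)          # right: step up the difficulty
            else:
                level = max(highest_correct, start_level)  # wrong: back to the last level handled
        return highest_correct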

Now, with regards to interviewing, I like to start from the basics ... I'll start out with simple questions about the languages they've used, so that they're comfortable. Syntax isn't majorly important to me (you can google it); however, I take note of how comfortable the interviewee is with the syntax of the languages they're using (especially if it's one we use). So while it won't count against them, it can count in their favor.

From there I'll ask them more difficult questions. How would you implement this one thing? We have this one problem we're dealing with, how would you solve it? If they can't do it, I'll try another and another, giving them multiple chances to show that they can hang. All the while I'm poking and prodding to find out how they think about stuff ... and what makes them tick.

And then I'll go to the really hard computer-sciencey stuff (which bores me to tears, to be totally honest; sorry) ... just to see how proficient they are with it. Obviously a person who eats algorithms and data structures for dinner, in addition to acing all the other stuff, is someone I'd love to have if they work well with us on a small project we'll have them do.

I do the same thing with programs they've written, side projects they've done, etc. To summarize, I'm just trying to find out where they are on a continuous scale from "total newb" to "could write OS X from scratch if they needed to".

Basically, I think brainteasers CAN be useful, but I think we should use them carefully. Value the way someone thinks over just getting the answers correct.

my 2c


I personally adore riddles and algorithmic puzzles, but I too have had to confront the reality that they simply don't provide much information about the candidate's actual coding talent. Instead, where I'm working, we combine a few different strategies to gauge an applicant's overall potential:

1. Nobody gets an onsite interview without solving a coding problem first. These are designed to take a couple of hours for a competent programmer. For each applicant, we send the challenge most functionally relevant to the position they're interviewing for. Despite the negative comments elsewhere about "take-home" problems, I think they expose a lot of information about a person's coding style and thought process that whiteboard code does not. For instance, we recently received a solution to a simple graph-related question that was technically correct and usually fast, but contained at least five nested layers of loops and took forever on a certain class of edge cases. The candidate obviously understood the problem, but seemed oblivious to the fragility of his solution. You can't grade for style or really pound the edge cases in an onsite interview, because candidates usually have their hands full enough finding any solution.

2. At least two of the interviewers spend the majority of the time on big-picture, conceptual, and software design questions.

3. I split my interviews between puzzles (the 17-minute bridge, the 200m building with 150m of rope, etc) and algorithmic questions. However, I don't care about whether the candidate answers my riddles correctly or not; I'm only observing behavior and process: in other words, whether they remain calm despite uncertainty, and whether they ask relevant questions that reveal more information about the problem and make observations that narrow the possible space of solutions. (For example, in the bridge question, realizing that the slowest people should cross together is as good as a solution to me.) I do care about their responses to the algorithm questions (e.g., find a subrange of an array that sums to 0), but mainly to determine whether they have a reasonable idea of which data structures to use, and how effectively they detect problems with the rough solutions that go up on the board at first.
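For what it's worth, the usual answer to that subrange question (not necessarily the one the parent has in mind) is the prefix-sum trick: if the same running total shows up twice, everything between the two occurrences sums to zero. A quick Python sketch, with function and variable names of my own choosing:

    def zero_sum_subrange(values):
        # Returns (start, end) indices of a slice summing to 0, or None if none exists.
        seen = {0: -1}              # running total -> last index where it occurred
        total = 0
        for i, v in enumerate(values):
            total += v
            if total in seen:       # repeated total: values[seen[total]+1 .. i] sums to 0
                return seen[total] + 1, i
            seen[total] = i
        return None

    # zero_sum_subrange([3, 4, -7, 1, 2]) -> (0, 2), i.e. 3 + 4 + (-7) == 0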


Every time I see these posts on here, I get a sad face. I am a SQL Server DBA and most of these puzzles don't translate well to SQL. That doesn't stop me from trying!


Take a look at Joe Celko's SQL Puzzles and Answers.


I really don't know what the right way to interview is, but I do know that you are actually hiring a team, not a set of individuals. You need a balanced team (meaning people smarter than you on each of the important fronts) to deliver a successful product.

So I guess it might be OK to hire somebody based on a brainteaser if you need a sharp, quick-thinking person who will be a great asset during brainstorming sessions. I don't know.


It's worth noting that 37signals believes in hiring by requiring candidates to work on a 20-40 hour project before making the hiring decision. Their approach would be completely unacceptable to a large number of developers; it's only because of their strong reputation and remote hiring that people are even willing to consider it.


I once interviewed for an architect position at one of the "web scale" companies, and it was an algo fest all the way through. All the while I remembered the dev manager (whom I'd have reported to) saying that they were looking for an architect for a new project, and that the key challenge was learning all the existing components they have, picking the right ones for the job (or proposing to build new ones as required), and then - most importantly - getting it all sold to a bevy of other architects and stakeholders.

This was right up my alley, as I currently lead 3 teams and we regularly have to deal with architecture and design issues on a large application with tons of other teams involved; I was therefore pretty stoked about the job - except that the algorithm wall stood between me and it. Not having ever formally done an algorithms course (although I've read the books out of interest), I went down in flames; at least I think I did, because I've yet to hear from them. Wanting closure, I even contacted them and asked point blank if I was rejected, but I was assured that I WASN'T rejected and that they'd "get back to me".

The process itself was largely the standard "how would you write <algo x> in Java", which I did not do very well on. One interviewer asked me to implement a queue using two stacks and maintained that "there's a better way" despite my multiple attempts. I finally asked him if this was an actual real-world problem that they faced in their company, to which his answer was a glib "No, I was just testing your problem-solving ability."
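For reference (and I obviously can't know what that interviewer had in mind), the textbook "better way" for the two-stacks queue is the amortized version: only pour the in-stack into the out-stack when the out-stack is empty, so each element moves at most twice. A rough Python sketch:

    class TwoStackQueue:
        def __init__(self):
            self._in, self._out = [], []

        def enqueue(self, item):
            self._in.append(item)            # always O(1)

        def dequeue(self):
            if not self._out:                # refill only when the out-stack is empty
                while self._in:
                    self._out.append(self._in.pop())
            if not self._out:
                raise IndexError("dequeue from empty queue")
            return self._out.pop()           # amortized O(1) per operation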

To be fair, however, some of the other interviewers were better.

One actually glanced through my resume, picked one of my open source projects, and grilled me on what I'd do given a hypothetical implementation constraint on a particular piece of the architecture. In hindsight it looks like he was trying to get at my algorithmic problem-solving ability while connecting it to something I'd built; but at that moment I was so hung up on the problem that my software solved, rather than on how the software itself was built, that I ended up frustrating him with my pre-thought solutions. He kept asking for "from scratch" solutions, to which most of my responses were "but it already exists in X, so I'll just use it".

The best interviewer was a senior guy who was totally OK with my lack of algo knowledge, and who very quickly switched to designing a solution to a web-scale problem. He asked me for one solution, picked out its flaws using Big-O notation, and asked me to refine it. This went on until we reached the asymptote that they had actually implemented in production. Along the way he gave me more of the solution than I gave him, but I ended up with a better appreciation of the whys and wherefores of the problem - and, more importantly, of problems of that kind.

My general takeaway was that companies that use algorithm tests make the following assumption:

        algorithm solving ability => general problem solving ability
...and this is obviously true for companies where scale is a fundamental issue, i.e., where the scale of the problem has not been addressed by any existing solution.

However, the problems with using this assumption to measure candidates are:

1. Your company's problems may be solvable using existing solutions. Then the real litmus test for your candidates is how well they can assess and assemble existing components/frameworks/libraries to realize the solution. This is where attitude matters more than anything else, as do prior work and interest in such work. "What was the best project in your career and why?" might be the best question to ask such a candidate.

2. Your interviewers may never have connected the LHS of this equation to the RHS. This is where, IMO, questions like "queue using 2 stacks" or "manhole covers being round" come in. The interview becomes cargo-cult worship at the Algorithm/Puzzle altar.

So my suggestion to companies that still have algorithm-based interview processes is: when presenting such a problem in an interview, at least tie it to a real-world scenario that you had to deal with in your workplace, so it's not a test of how much practice I've had with Cormen and Skiena (yup, read the Yegge post).

Speaking of which: am I the only one who finds all algorithm books too prescriptive? They all seem to dwell on a laundry list of data structures followed by a laundry list of algorithms, leaving the reader to figure out the "what to use for what" and the "why" all by himself. I'd pay for an algorithms book in the style of Polya's "How to Solve It". Skiena's war stories are an attempt in that direction, but a full frontal attack on A&DS would be great.

Aftermath: I left that interview with two takeaways:

1) I had to shore up my knowledge of algorithms and data structures, simply because they are the basic tools of the trade and there ARE still problems around in the industry that need you to dig that deep (thanks to the final interviewer).

2) Large organizations have "how to assemble software using existing pieces" issues in their software development, not "build from scratch" issues; but developer (turned interviewer) hubris will always skew towards testing for the ability to build (proxied by algorithm skills).

Either way - 37signals notwithstanding - it makes sense to catch up on your algo skills :)

Oh, and please, please, please let's do away with the Hollywood principle when it comes to developer job rejections. If I'm rejected, tell me. I can take it.


There are so many different views here about interviewing that it can get quite dizzying.

1.) 37signals' view is that there is no point in asking programming puzzles or brainteasers; this goes for whiteboard questions as well. Their view is that it's just better to go over real code the person has written, and if they seem like a good fit, then why not try them out.

2.) Cletus's view is that you should ask a "simple white-boarding problem" that would, in theory, weed out the "bad programmers", with a certain error margin.

3.) Dmbaggett's view is that solid programming puzzles (ones that are usually take-home, i.e. ones you do at home) are good for attracting programmers that will fit (the role of a computer scientist at ITA). However, cheap whiteboard problems aren't necessarily the best for finding great programmers.

4.) edw519 commented a while back that a written code sample done at the time of the interview is best, as it provides discussion material that will gauge whether the interviewee is a great fit for the role.

[1] http://news.ycombinator.com/item?id=834513

5.) Another article on Hacker News, "I Won't Hire You", got pretty much downvoted for its view that you have to be greater than great to work at Golem Technologies.

[2] https://news.ycombinator.com/item?id=3428567

6.) There was one other article on Hacker News; it mentioned how, in the end, the programming interview puzzles didn't matter, and the programmers they hired who had done the best on them ended up not being the most productive programmers.

7.) An article about hacking the current interview system. Apparently the top commenter, timr, really thought that you should grill the interviewee to find out if they are hacking the system.

http://news.ycombinator.com/item?id=3370341

The list can go on and on.

So in the end, what is the best algorithm for interviewing a programmer? Are you hiring out of fear - you need X to do job Y, but you don't want X to destroy your company? Are you hiring to find a rock star programmer who can take your company to the next level? Are you hiring a programmer to do a specific, well-defined job? Are you hiring to feed your ego, to prove that you have power over someone?

It seems very hostile to get a programming job. It seems that some interviewers want you to come in and contribute from day one, which I have never quite seen happen. Why are some interviewers so harsh and such dicks to the interviewees?

If a programmer fails your interview, does it mean they are truly stupid?

Is your interview really a true gauge of IQ? Does your ego play into the interviewing process, as one HN commenter wrote:

"The biggest problem with anything like this is the idea that 'here is some test of inherent intelligence - I am far better than you so you are inherently unable to do this thing' which is just the biggest barrier to actually trying to do something - if you think you inherently suck or at least are simply mediocre, your motivation to do that thing is severely reduced."

The general feeling I get is that, as interviewers, we are trying to put down and ridicule interviewees who, for whatever reason, did not pass our test - maybe in accordance with our company's guidelines, or maybe our own egos.

Instead of seeing the glass half full with the interviewee's potential, we are seeing it half empty, as if their ability will never be good enough.

I think that with determination and practice anyone can be a "great" programmer. It's sad to see that many interviewers don't realize this because "they don't have time."


Why isn't anyone testing candidates' ability to read and work with existing code? I do that all the time with interviewees.

Why do I do that? The obvious reason is that the potential hire will be required to work within a large code base (often code not written by themselves), and having the ability to read and quickly understand what a piece of code does is a great skill that not many great coders have.


I thought it was because 37signals didn't want their brainteasers splattered across reddit & ycombinator.

Today I Learned ....



