Author of the interview question here. I came up with the idea after working at the company for a couple months and realizing that the skill of "diving into an unfamiliar area of the code and quickly figuring it out" was very important for us. Database codebases are large and complex -- so much so that almost every new feature you work on for the first year or so feels a lot like doing the question.
Over the years of evaluating with it, we learned a lot about its quirks (both false positives and false negatives). At one point, someone got so frustrated that they threw their laptop on the floor. Happy to answer any questions about it!
I would pass the shit out of this question because it's right in my wheelhouse, but I don't consider myself a particularly exceptional programmer. I have a CS degree, but I'm positive I'd fail a FAANG interview w/o advance prep. I'm really comfortable at modifying existing code[1], gluing together existing solutions, and know enough C to get through this question.
But the OP says this:
> When you’re maintaining a large codebase, there are always going to be codepaths you don’t fully understand
Ugh... I hate modifying code I don't understand. That doesn't mean I need to understand the entire code base, but if I'm modifying a portion of the code base, I'm loath to touch it till I understand what it's doing.
Also, of all the work I do, I consider this the least expressive of my ability. It's such a mechanical thing to do and doesn't require a whole lot of thinking?
So I guess I can see how this skill—for a job at your company—is necessary but in no way sufficient.
I YOLO’d the Google interview two times. First time I failed. Second time I somehow passed. It can be done. Might have gotten a higher level with studying but I don’t think I would have ever studied.
After interviewing people for a while, I've learned how to suss out what people gain from practicing vs. actual engineering instinct.
I've passed people who "didn't meet the bar" because I could tell they just didn't practice, but exhibited 4 stars on every "Ultra Instinct" signal. Programming speed isn't important; correctness & habits are what are important. + or - 10 minutes to finish a problem doesn't really matter in the daily job.
I've had two memorable interviews where timed coding was part of the interview, and I wowed the management team but did not get a call back due to taking about 10% too long on the coding. This is a good thing, in hindsight, considering the eventual fates of those businesses that hired like that.
It's counter-intuitive not to test for coding at an interview for a developer. But you just can't learn what you need to know, as a hiring manager, from a pass/fail timed test. This greatly informs my hiring process now.
I'm a former FB eng and a Xoogler and have been an interviewer at both places. The recruiter sets the level for your interviews. There's an option for the interviewer to recommend you for a higher level, but there's no incentive for the interviewer to do so and is almost never used. Getting higher rating from the interviews will only make the hiring committee's decision to hire you easier, and almost never affects your levelling.
I've done two onsites with Google in the past essentially YOLOing it (I only study my interview failures because I view otherwise as an inefficient use of my time) - first time did terrible, second time almost passed if I didn't completely bomb my very last session. The second time ended up not really mattering because two different teams in two different orgs for my current non-Google FAANG wanted to hire me after onsites done on back to back days (side note: that was almost 15 hours of interviewing in two consecutive days - that's a lot of time, I only was able to do it because I was funemployed at the time).
I actually appreciate it very much if a candidate didn't study & focus more on giving the best answers to their capability when I interview them - the questions I give them are usually questions that no amount of studying would have prepared them for, so already taking the mindset of trying to respond thoughtfully & earnestly to problems & situations that change on a whim puts them a step ahead.
I studied for the interview I wanted (by being a thoughtful software engineer in my day job) and not for the interview they offered (which would require me to either be a professional leetcoder or some algo/performance expert). If they didn’t want a good software engineer then they’d have to pass on me. I made it clear in every session that I was thinking through the problem, asked good questions, and when there were aspects I could write concrete code for I did. If you score by solution competence and performance I think I aced 2 of the coding sessions and did pretty mediocre in the other 2. My interviewers must have been willing to go a bit off of the default mode of operation as I managed to get an offer.
I don’t know if interview performance has anything to do with negotiating power, but I was able to get damn near the highest possible total compensation my level allows for without a competing offer.
I YOLO’d FAANG interviews twice (the interviews were in cities I wanted to visit, so why not), passed both times, and even accepted the offer the second time. In the end I wasn't a very good fit for the company.
What happened if you don't mind me asking? (I might be in a similar situation lol. I just started a FANG on a team of all Indian ppl and I'm the only white dude and I can barely understand the speech much less the code)
I worked in games development most of my life and then moved to web development with this change, and for some reason I ended up in a position a bit more senior than what I was prepared for in this domain. I thought the job would be fine, especially it being my second successful interview with the company; I figured their processes meant that I was good enough for the job they offered me.
Is game development as tough regarding WLB as people say? It's always been one of my dreams to work in Game Dev but not sure if I should keep it as a hobby or give it a shot. Are there sane Game Dev co's with decent WLB?
I mostly worked at smaller studios, and for the last 6 years of that career I was running my own company. I only had WLB problems in my first 2 years, but I never worked in AAA.
I got a job as a front end developer. We had a ton of streaming data and needed to index it in the front end.
The "right" solution would be to fix it in the back end so that we didn't fetch all that data when it wasn't needed, but that wasn't possible because of the horrific project/product management.
So I had to build binary search trees to index the data so we could work with it fast enough to have a reasonable user experience.
So yeah -- you will need some of this stuff. You're constantly going to be searching for things, reversing things, looking for patterns.
It won't be as clear and abstract as a leetcode puzzle, but the reason I had to write that binary tree was that whoever came before had clearly never considered using one and was doing everything completely wrong. It was a disaster and made the codebase insane.
If they filtered in the hiring process for people who know the basics then things would have been a lot more performant, and they wouldn't have burned so much time working around the performance issues caused by his terrible solution.
I absolutely agree that hiring folks who are aware of the basics is important -- essentially, you want engineers who are aware of the space of possibilities.
In my view, what makes folks dislike typical coding interviews is that in the real world, what you need is a solid understanding of what algorithms exist and when to use them/what to look for, rather than the knowledge of how to build one on-the-fly.
To solve the issue you described, you don't need to know offhand how to implement a binary search tree on a whiteboard. You do need to know how to identify indexing as a bottleneck, and how to broadly think about a solution. You could then search for indexing strategies and, having studied them at some point in the past, you'd be able to pretty quickly refresh your memory and find the one that's a good fit for the problem at hand.
For this reason, I've always thought these exercises would be much better off as essentially "open book" rather than real-time whiteboarding problems -- because that reflects how engineers actually work. That's also what I've pushed for in my own workplaces, and we've had good success finding talented folks, and heard positive feedback about this aspect of the process.
I would add to this that while knowledge of computer science algorithms and data structures is _absolutely_ useful, being able to implement a binary search tree RIGHT NOW, in under an hour, is not necessary for... pretty much any job anywhere.
I've worked at 75% of FANGs. My method was basically just to bone up on algorithms and data structures by going through the Wikipedia list and trying to implement them on pen and paper. Practice thinking out loud, practice space management (no backspacing on pen and paper). Be honest if you've heard a question before. Know how to analyze the runtime of your algorithms.
I chose to interview in Python, even though I know other languages better, because it is fairly dense relative to, say, Java or C++, and easier to write by hand.
I can't reply to the sibling commenters, but I'll provide a contrarian opinion. I'm an interviewer and there's no way for me to tell who is just really good vs. who has seen the exact question before. Telling the interviewer just gives them information that goes against you, so I wouldn't recommend doing this.
I always prefer to say "I've seen a similar problem before, let's see if I can remember how to solve that" or "let's see if the same kind of approach works here".
This is a balanced approach. It's honest: you probably haven't heard the exact question word for word, even if you can't recognize the differences. And it's actually what they're selecting for in the first place. They're not expecting you to invent the concept of a binary tree, or whatever, but to know it exists and how to implement it and to recognize where it might be the right concept to apply.
I've never had any interviewer say "OK forget it, let me give you another question where you'll never have seen something similar".
I was thankfully asked (in the interview, not just assuming I’d been asked the Q before).
Question was: write code to determine if the stack grows up or down. I'd been writing computer games for several years and smashed it (after telling the interviewer that the answer would necessarily rely on undefined behavior), and the interviewer somewhat dismissively said “you should have told me someone else asked you this question.” What are you talking about? This is just an easy question.
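For the curious, a minimal sketch of the sort of thing I mean (hypothetical names; as noted, comparing addresses of locals in different stack frames is undefined behavior in standard C, and an inlining compiler can defeat it, so this is a heuristic that happens to work on common platforms rather than portable code):

    #include <stdio.h>

    /* Heuristic only: comparing pointers into different stack frames is UB,
     * and the compiler may inline this function and defeat the test. */
    static int stack_grows_down(int *caller_local)
    {
        int callee_local;
        return (char *)&callee_local < (char *)caller_local;
    }

    int main(void)
    {
        int caller_local;
        printf("stack grows %s\n",
               stack_grows_down(&caller_local) ? "down" : "up");
        return 0;
    }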
A good answer would be "no, but when I was working on <game title>, which is a game that has to run on many different consoles including <console>, we had an interesting bug where <weird stuff happened> and the stack cookie I inserted to test if it's a buffer overflow remained pristine. After a lot of debugging, at the stage where I started suspecting it's a compiler bug and inspecting raw opcodes, I was looking at my debugger and noticed the stack pointer was lower than the stack cookie address and understood this CPU has a stack that grows the other way from what I'm used to".
> “you should have told me someone else asked you this question”
At what point do we end up saying "my current employer 'asked' me this question, because it's part of my day to day job..."? At some point you have some experience in certain areas that you just 'know', and it's not some sneaky "oh I crammed leetcode for 3 weeks!" tactic.
Yes, it seems to me that if interviewers are going to be annoyed about this then they should stop using leetcode and/or using generic interview questions.
I got hired from that loop. The same guy challenged my failure to write #ifdef include guards on my header file on the whiteboard. My answer of “oh, you’re right; I’ve configured emacs to automatically insert those, so I don’t have to think about it” seemed to more than satisfy him, and we ended up being close colleagues for several years.
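(For anyone unfamiliar, the guards in question are just the standard header pattern, shown here with a made-up macro name:)

    /* foo.h -- the include-guard boilerplate emacs was inserting for me */
    #ifndef FOO_H
    #define FOO_H

    /* declarations go here */

    #endif /* FOO_H */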
It does in this case, yes; how would you see it as an advantage? You have to pass X number of algo/data structure questions, and your being honest will only make them find a different question. If you get the different question wrong, guess what, you are out. It is what it is.
If the hiring process is set to punish honesty, then maybe by being honest you avoid working at places with such a process and consequently with the people who passed it?
Note that I'm not saying that FAANGs are like that (I have no opinion on that).
I am not saying honesty will hurt, and it probably would help you. But again, the interviewer is going to have to get a replacement question... you have a reduced chance of solving that one correctly.
I have a question that I've been asking for years that is somewhat domain-specific, but should be answerable by anyone who knows what a set/map are. About 20% of people get past "stage 1" within 5 minutes, usually people with finance/trading experience, and some will say "oh, I implemented something just like this a few years ago." Some people take 30 minutes and do ok, and some never finish. When someone tells me they have had this problem before, I tell them it should be really easy for them then, and we can move on to more interesting things!
For those that DO get past stage one, it is a boss with multiple health bars. We talk about what improvements could be made, then what improvements on THOSE improvements. We keep digging until we are out of ideas. The best candidates are the ones that stump me, or introduce new ideas. I present this problem as a pair problem-solving challenge, so there is no one right answer, and it has a lot of back and forth.
The whole leetcode approach to interviewing is basically
1. interviewers asking questions that they wouldn't be able to solve
2. interviewees pretending to solve on the spot things they've memorized
3. interviewers pretending to believe them
There is no way for you to tell if someone is really good (as is) vs. someone who has grinded leetcode, since the process (which assumes prior prep) a priori discards the first group.
Leetcode grinding is suggested as necessary by everyone, including recruiters (in-house or 3rd party), hiring managers, prospective colleagues. You can see comments from devs at FANGs even in this thread.
I understand that this is a common belief, but I don't agree that it is strictly true; it is contrary to my own experience, both as an applicant and as an interviewer.
Note that we are talking about a particular interview process which uses CP (Competitive Programming). This is common at FAANG companies (let's exclude Netflix since, as I've heard, they don't do it) and at companies that copycat the process. Of course, there are many other places that don't do it.
We are? What is "competitive programming", and where did it come into this thread? All I see upstream from your comment is a discussion of Google interviews and large tech company interviews generally. I don't recall anything particularly "competitive" when I interviewed at Google (or Facebook, or Microsoft, or...); they just had me solve problems, as usual.
That's the point, CP != CS (Computer Science). It is a separate discipline/subject with its own trivia knowledge & tricks. CP uses Computer Science the same way as e.g. Physics or Biology use Math. And CP problems are used in those interviews. https://en.wikipedia.org/wiki/Competitive_programming#Benefi...
I guess it is okay telling your interviewer, depending on your comfort level, as most never change the question regardless. That said (and as another commentator points out [0]), if an interviewer asks you to call out questions you know from before, then you most certainly should (unless you can sell the bluff...).
When you’re an interviewer and you’ve asked the same question enough times (tens of times is plenty, really), you’ve seen it all and it’s really obvious when someone has seen the question before.
People who haven’t seen the question before always stumble somewhere. There’s something they didn’t notice at first and need to account for, their solution is not structured optimally for the follow ups, they iterate over some possible solutions while thinking out loud to eliminate them etc.
It’s honestly not that hard to tell when someone is pretending they haven’t seen it before.
As an interviewer I find it easy to catch candidates who are regurgitating a memorized answer, but catching people who know the answer in-and-out is really hard. I've had the exact same experiences on the interviewing side of the table as well.
I think interviewers tend to overestimate their ability to catch people who have seen the question before and miss on tons of candidates who are good at answering seen questions.
It's actually not all that hard to stumble on the question at predictable spots. My process for solving a question I've seen before and one that is new to me actually doesn't differ all that much: ask several clarifying questions at the start to confirm I understand the problem, then write a quick-n'-dirty solution as fast as possible. In this first pass I usually don't pay too much attention to things like bounds, which you wouldn't memorize anyways even if you knew the solution. Then I run a pass to polish the solution up, present it, then think of edge cases and how I'd test the solution. The only real difference is that I very rarely pretend not to have seen a question (I have done it twice, both in situations where it was clear the interviewer was out of questions and running down a list of things from memory, and I just wanted to humor them). You'd think that I'd be more sure of myself when I actually know the answer, but when I don't actually know what I'm doing I will usually come to the answer as I ask those clarifying questions and start stating the facts that usually score "this person at least has the gist of the problem" points, like "oh this is a graph and we're doing something with distance, let's see if Dijkstra is the right thing to apply".
When a programmer goes through a question fast and jumps right into the solution, there are two possibilities. One, the candidate already knows the question. Two, the candidate is an exceptional programmer. If the rest of the interview doesn't match up to this, then we will assume the first case.
I definitely place candidates who are being honest upfront above others. They will be reliable and trustworthy, which is an important quality for programmers.
First principle that every interview guide teaches you is to take time to understand the questions. A candidate who has spent months preparing for a FAANG interview will highly unlikely rush into answering a question, even if they have seen the question before.
One reason for folks to jump right into the solution is the interview anxiety - which is both due to lack of practice and trying to compare oneself to folks preparing for months ahead of interviews.
As someone who has done a lot of interviewing from the interviewer side at my current FAANG, it's usually obvious when someone has seen the question before - in the debriefs, the candidates who were honest have it noted out loud to the debrief panel & it makes them look good due to integrity, and those who were dishonest also get it noted as a significant negative.
Perhaps I evaluate differently than a lot of interviewers, but I'm primarily interested in figuring out if a candidate has the traits/skills that I'm looking for in a potential coworker - for us, the traits matter more than the skills even.
I don't think I've ever been asked a question at a FAANG interview that isn't in one of the (legion of) "FAANG interview prep" books/sites that ex-FAANG engineers and interviewees constantly publish and update.
Your criteria here are likely unfairly dinging pretty much everyone who has done significant interview prep.
But significant interview prep hurts the primary job of the interviewer, which is to evaluate whether candidates would work well within the team/org - I don't mind if candidates do preparation or not, but the point isn't to try to find people who game the process, the point is to find out if a candidate is likely to be successful in the role. The problem is oftentimes it turns out that a lot of candidates who focus too heavily on preparation fail to demonstrate the qualities needed to be successful.
I don't give a particularly hard interview; candidates and interviewers who pair with me are all pretty happy with the session and almost always leave relaxed. My session is usually targeted towards a certain set of leadership traits (and occasionally I'll do coding screens as well, although I've been put on those less in general). My interview feedback also has generally been corroborated by other interviewers in debriefs, so it's not like it's in crazy land.
FAANG recruiters literally send the candidates emails that say things like "you should study Cracking the Code Interview beforehand". No, they aren't supposed to, and no, the incentives aren't set up correctly to prevent it (I've received these emails from Google, FB, and Amazon recruiters in the past).
We can't really go around penalizing behaviours that the recruiting side of the house is actively encouraging shrug
It's better to fake having not seen it than failing an unseen before question. No amount of honesty points will save you from failing a question.
Sure if you are good enough that you pass unseen questions regularly go with honesty. If you are not, better to fake it. If FAANGs have problems with candidates knowing the questions maybe they should think of different interviews?
The coding question I give (on the occasions I am called upon to ask one to candidates) typically is not particularly unique, but practical - if someone wants to study for it, it doesn't really give them a notable edge given all the possible branching questions. The most notable tell that people did see the question prior, though, is if they jump right into talking about the problem without asking qualifying questions, implying that they are quite familiar with the problem, don't need me to qualify it, and don't think to verify scenarios with me.
For my non-coding problems, I just create it from scratch depending on the position/needs & spend a bit of time navigating the scenario myself and store the question in my notes.
As to failing a question, failing a single question isn't necessarily a deal breaker in itself; it's showing a pattern of not meeting the bar that is. I may rate someone a 2 out of 4 if they didn't go into sufficient depth in a particular question I asked, but I probably won't stand in the way of hiring them if they did ok otherwise and that failure was just an aberration. A loss of integrity is a perception that is likely to sour people on any upside of hiring, though, and overcoming that bar is incredibly difficult: if someone is clearly rehearsed on a particular question and is dishonest about it, they're probably not getting a 3 or 4.
I've failed enough interviews to know that failing a question almost always means a failed interview. There are enough candidates that will get the question right, and one of them will be hired, not me. Honesty aside. To be clear, I am talking about FAANGs or famous startups that get hundreds of candidates per position.
I'm a senior engineer at a FAANG who is close to reaching staff by promotion - I've conducted enough interviews over the past 5 years at my current company to at least understand how my org operates.
How the process for us works is if you are a strong enough candidate, we have no qualms giving multiple offers simultaneously and working out the reqs afterwards, even borrowing against a future one if we have to. How we evaluate is probably also a lot different & more thoughtful than a lot of candidates realize - we're discussing leadership traits, strengths & weaknesses, and skills in our debriefs and what we all observed about the candidate in our sessions. No candidate does perfect in any given session - even candidates who I have given 4s for have slipped up or had negatives observed.
I think you are an outlier then. What leadership traits do you discover when forcing candidates to do BFS on a binary tree or similar questions?
Why would you take someone who said he knows question A and then moved on to bomb question B, when you have 30 candidates who solve question A? OK, no one does perfect, but bombing a question seriously hinders your prospects. If you see a question you know or semi-know, you gotta be very silly to say it. People don't drill Leetcode to say "oh sorry, I saw this one already." Nope. In fact, even remembering hundreds of questions and being able to solve them under stress is hard enough.
Many FAANG candidates are just brilliant and don't really need much preparation to get accepted. But others are normal-smart and spent months preparing by going through algorithm questions. It's quite certain they run into 1-2 questions they have seen before, and this enables them to pass. If it weren't the case, no one would subscribe to Leetcode...
My org is generally anti-leetcode so I can’t speak for your experiences - the only data structure/algorithm questions you may encounter in an interview with my org is likely practical questions (i.e. problems we have had to solve on the job).
I’m usually not even asking a coding question in my session - I set up a practical/common problem beforehand and we explore the scenario together. I can assure you that many candidates don’t pass my session necessarily, even if they have proven in other sessions to be brilliant coders - I’m not looking for technical brilliance in most of the interviews I give, and neither are the hiring managers I work with. To me, focusing on the coding is most important on the technical screen, not the full panel - once you reach the full panel, your goal is to demonstrate technical leadership, which includes expertise in knowledge, coding competency, focus on UX, and some other areas for more senior roles (conflict management, responsibility, navigating different stakeholders for product/project decisions, etc.).
If all your focus is in is just the coding questions, you’ve likely already set yourself up for failure.
I don’t work at a FAANG but I hope to one day. But I do interview for Data Engineering roles from time to time for my and other teams. I think by saying you’ve seen this problem before it shows honesty. It shows character. Those are points that, at least for me, are positives and I like to hear it. I’ll still go ahead with the question just to make sure but if something typically takes 30 mins and you finish in 10 then I’ll move on to a different question to fill the gap.
If this is the first time, at the same company, it (probably) does not matter that much.
When I was an interviewer for a (second) technical phone interview at Google, one candidate performed pretty badly on one question. Later that week, or maybe the following week, that very interview came up in the pre-HC meeting, and one of the other pre-HC members pointed out that I had repeated a question from the first phone interview. At which point I pointed out that I had not been provided a list of previously asked questions, the candidate had not highlighted it, and the candidate's response was still sub-par.
Not mentioning the repeat counts against character and responsibility.
Not performing better at a repeat question a week later counts against competence.
That was an easy "let's not bring this candidate on-site" decision.
> That doesn't mean I need to understand the entire code base, but if I'm modifying a portion of the code base, I'm loath to touch it till I understand what it's doing.
This is also me. I get flack for spending too long on what seem like "small" fixes but I can't be sure (especially in spaghetti code) until I've dug through it. On a couple occasions, different companies unfortunately, I caught flack for spending too long trying to understand critical finance code. As in, how we were billing every single customer. But it always yielded results, and a week later I'm in a meeting explaining how abc original implementation would have broken xyz, but the scowls never really go away. It's frustrating.
This is why diversity of thought is important in teams. Some people will be the “ship, ship, ship” type who make sure that the team is meeting its goals. Some will be detail-oriented; they may frustrate the first type, but will ensure the team is producing quality results. Some people like to be on the bleeding edge and bring new techniques to the team. Others like to clean up tech debt or bring incremental improvements to old solutions. All of these are valuable, even if they at times frustrate each other.
These are generalisations. There are likely far more “types” and different people will act as different types in different situations.
Having a mix isn’t sufficient though. The team also needs to find ways to prioritise and manage conflict to get the best approach in place for any given situation.
I'm in the same camp, I don't even understand why programming is done under constant artificial time pressure, why are programmers constantly hounded and stressed for no particular reason? Nobody is really waiting for this task, it's an old well-known bug, or just a new additional feature but OMG HOW LONG, ARE WE DONE YET!??
Edit: same thing in this interview question, the whole point is super quick performance and working under time pressure, and making changes to code bases that you don't have time to understand, why?
You have a very good point there. Deadlines in the real world are often fairly artificial. So, why do we put people under extreme time pressure to perform in interviews?
It depends on the company. Especially in smaller companies, the success of the manager is very much tied to the success of the whole company, but yeah, in bigger enterprises it's often like you said.
I have found that sometimes (!) imposing an artificial timeline can bring the whole team together if done correctly. I've been in situations where projects weren't finished for a long time, everyone just cruised along, because there was no timeline and time to market was "when it is done". And there were always more important things to ship. This is bad for company and everyone involved... Just ship it, then iterate.
It's okay to be frustrated at this, but it's important to recognize why it's happening.
FAANGs go through an incredible number of candidates with only a few slots (comparatively) and the point is to simply make the bar higher and higher. Just like pragmatic engineering: you have a 'good enough' candidate set.
For me, the problem is when smaller startups copy the format - expecting candidates to jump through all the same hoops. If a FAANG has a 1:100 position-to-candidate ratio, a startup will be lucky to have 1:5 (it's incredibly expensive/time-consuming for a startup to interview).
Running the same test likely means you're rejecting a lot of potentially good candidates who didn't want to go through a 'FAANG' interview.
If any of the candidates they are refusing are capable of doing the daily work at a FANG but can't pass the interview, they're artificially constraining their supply of candidates and increasing the cost they need to pay for developers for absolutely no reason.
The type of work in a FANG is mostly not that dissimilar to other companies (the exception being teams working with machine learning, performance optimization, dealing with tricky scale). I understand hiring specialists for specialist jobs (and I still wouldn't test them on leetcode; ask domain specific questions).
> It's just ego-driven overspending on developers.
It's worth considering that engineering interviews are largely conducted by other engineers, and engineering hiring bars are largely set by senior engineering staff. They have a pretty vested interest in maintaining their own status and (relative) scarcity. Constantly raising the technical interview bar keeps engineers a supply-constrained resource...
They are artificially restricting the supply of viable candidates, but this gatekeeping happens whenever engineers want (sometimes subconsciously) to limit the competition and protect their high wages.
> they're artificially constraining their supply of candidates and increasing the cost they need to pay for developers for absolutely no reason.
The incorrect hidden assumption is that by decreasing your own supply the value goes up.
It's well known that turning down candidates or refusing to interview them sends the message that their skills are less valuable. It's HR management 101.
The same goes for "anti-poaching" agreements. If FAANGs don't hire from each other, they reduce the number of high-paying employers willing to hire the average FAANG employee, effectively reducing average salaries.
> FAANGs go through an incredible number of candidates with only a few slots (comparatively) and the point is to simply make the bar higher and higher.
This doesn't fit the "we can't find candidates, so we need more H1B's" narrative.
Supposedly, there are many open slots unfilled, so purposefully failing people who can do the job, who meet the minimum requirements, should not be the outcome.
Instead of raising the bar, as you say, due to candidates o'plenty.
> This doesn't fit the "we can't find candidates, so we need more H1B's" narrative.
I disagree. In fact, this validates it.
On the basis of a world-wide talent pool vs. a domestic one. If you know there are stronger candidates elsewhere, then you absolutely want to test away the domestic candidates.
Think about it from a hiring manager's perspective: you don't lower your standards just because a specific pool of candidates can't meet your criteria when you can widen the pool, as far as you are able.
If the H1B candidates were tested on an 'easier/weaker' test then I would 100% agree with you, but I'm assuming things are equal here. And I'm ignoring anything related to the domestic vs. foreign workforce, "people taking our jobs" debate.
The bar is higher than needed to fill the positions (several people in the thread say they work in senior positions at a FAANG and wouldn't pass the interview). So good-enough candidates are being rejected, which means they shouldn't need the special visas for people from abroad that are intended for real labour shortages, it would seem.
They wouldn’t pass the interview *without studying*. And studying is expected for these interviews. That doesn’t mean the bar is too high. It just means the bar is an indirect metric, not a test of exact job experience/knowledge.
I think "necessary but not sufficient" is an accurate way of capturing our goals for the interview question. Some really great candidates felt just like you do about the question, and we learned over time how to present it to senior candidates in a way that did not (at least in my experience) trigger that reaction.
That said, there were also plenty of candidates who came to the question guns ablazing with the same attitude, and then failed miserably. I'm not saying that's you -- but rather that it worked out in practice to be a fairly good filter for whether people actually had (enough of) that skillset.
> but rather that it worked out in practice to be a fairly good filter for whether people actually had (enough of) that skillset.
How did you rate your negatives to judge the filter was correct? You know the "Positive" status, you know your true and false positives from the interview question (People you hired that performed well or didn't. Or said another way, the predicted value was positive while true value was positive or negative).
But did you know your true and false negatives (the people you didn't hire that would have been good or bad; or, said another way, the predicted value was negative but the true value was positive or negative)? Did you know the candidates you rejected who would have been good at fulfilling the role at your company (false negatives)?
To be clear, I don't know the false negatives either and the interview question may be the best litmus test for the behavior desired, minimizing false negatives and maximizing true positives. But I just don't know for sure if the question did work out in practice to be a good filter of people with that skillset without a true negative dataset. The Precision is good, but the recall/sensitivity may not be. Then again, you probably know how the candidates you hired performed before you used the new question and after - which may be an indicator on false negatives.
Also, that does seem like a really fun interview question to answer for me. Kudos.
EDITED: Changed the question at the same time the response came from OP. Original post asked if they hired those who failed the test to know more about the negatives. They did.
In my team I need someone like you to deliver features, not someone that can leetcode. I don't remember the last time I had to check an algorithm in Wikipedia. Probably 2-3 years ago.
Most of the time it will be debugging some arcane crap or adding to an existing codebase, anyway.
Algorithms are funny this way. Most of the time you don't need them, but when you do (and if you don't, you can't tell it when), it is a world of difference.
Still, just knowing they exist and how to formulate the search is usually enough.
I don't know if I would have passed the exact question, but something like this is also in my wheelhouse. The other thing that I value very highly is consistency in any given code base. However things are already done is how I'll keep doing them, unless there is a good reason to change. And to your point, that reason cannot be "I don't understand the code." IMO, too often people want to make big changes w/o understanding (b/c understanding can be hard), and end up missing the reasons why the code was hard to begin with.
> This challenge is particularly well calibrated for an interview because there is only one correct answer: “change bool incr to int opcode” (or anything isomorphic to that). The codebase and problem statement together very clearly imply that there are currently two arithmetic opcodes, and your job is to extend that to three arithmetic opcodes. [...] Cloning even one of these functions is probably suboptimal, but I might spend twenty minutes before realizing that.
I just completed it sans writing tests (took ~30min), considered and discarded that approach, instead duplicating the code-paths (leaving incr/decr untouched and instead making mult_ versions of everything). I did reuse the delta_result_type, though.
Briefly, my reasoning was that this would make the fork easier to maintain against hypothetical future upstream changes and keep the logic simpler, while guaranteeing that I didn't miss implementing anything in particular: the compiler would complain about missing constants or functions as long as I covered the entrypoints.
A bit of rule-of-three thinking: If in the future there would be even more arithmetic functions added, maybe it would be prudent to refactor and generalize parts of it? But not at this point.
Curious to hear if you agree with OP that my approach was incorrect or with me that it's equally valid (:
It's equally valid — your reasoning exceeds our expectation for the question. Essentially we're trying to test for whether you can understand at least one correct implementation.
Whether or not your approach is "better" depends on factors that aren't part of the problem statement, so it would be a mistake to assess it as incorrect.
> instead duplicating the code-paths (leaving incr/decr untouched and instead making mult_ versions of everything)
If there already are two arithmetic opcodes, shouldn't both of them already be handled by the same code, only with an operation parameter? In that case all that should be necessary is the handling of that parameter in the place where the operation is actually performed, and you only add one more value of that parameter.
Having said that, I haven't looked at memcached yet to see how it actually implements these operations. I'll have to do that.
That's the problem: since only incr and decr exist, a bool is enough to say which of these two operations should be done. But when you add a third, the bool isn't enough. So you either change the opcode to an int, like the author, or branch the mult instruction off separately, like the solution above.
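Roughly, and purely as an illustration (these are made-up names, not memcached's actual types or functions), the author's option amounts to widening the flag into an opcode and switching on it at the one place the arithmetic happens:

    #include <stdint.h>

    /* Illustrative sketch only. Before the change, a bool could only
     * distinguish incr from decr; a third arithmetic op needs an opcode. */
    enum arith_op { OP_INCR, OP_DECR, OP_MULT };

    static uint64_t apply_arith(enum arith_op op, uint64_t value, uint64_t delta)
    {
        switch (op) {
        case OP_INCR: return value + delta;
        case OP_DECR: return value > delta ? value - delta : 0; /* clamp at 0, as decr does */
        case OP_MULT: return value * delta;
        }
        return value; /* unreachable */
    }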
1. If you don't do arithmetic on it and it's not a primary key then it's not a number (eg an employee number might be "123456" but it's a string not a number); and
2. It's almost never a boolean; it's an enum.
I've lost count of the number of times I've had to change a boolean to an enum (some of which I created in the first place).
My favourite hack for this is when someone decides to add a third value to a boolean with:
Optional<Boolean> foo
Nope. You're wrong. It's even more hilarious when they add a fourth value:
Yea, I would prefer an enum over an int to choose the operation, but don’t forget that this is C, where enums _are_ ints. Oh well.
Also, it turns out there is already an enum to extend for the binary protocol, so the blog author reused that instead of making a new one just for this one function.
I would lean this way - I'm not a software developer by trade but I can cut and paste and look at compiler output.
Changing a bool to an int is the kind of thing that I would worry would have unknown side effects somewhere, whereas adding new code paths is unlikely to break the existing ones.
Upstream would be a consideration, and potentially seeing what kind of a patch they'd accept.
Never worked at a DB business, but feel confident that
> "diving into an unfamiliar area of the code and quickly figuring it out" was very important for us.
Universally applies to all software jobs.
What I find interesting (based on my own personal history) is not only the ability to solve a task in an unfamiliar code base, but also to do so without creating side effects (like, say, quadrupling the size of a binary because you incremented a poorly named constant that is reused as the size of a memory allocation by multiplying itself with itself).
I dunno, I've worked at a bunch of places where it seems people have gone "I'm not bothering to understand the old code, I'll just replace it with <framework/technology du jour>[1] and GOOD LUCK MAINTAINERS."
[1] Sometimes they write their own frameworks. These are almost always the worst.
I guess it seems like you kinda chose this question by chance, but for new similar types of questions, do you now do them yourself (or get a coworker to try) to get at least an n=1 sample for how long it ought to take? I like to give a 2x margin of time over my / a coworker's time, so anything that takes us over 30 minutes is right out or needs to be simplified when we only get an hour with the candidate. An experience with an intern at my last job who struggled with our large codebase (mostly in the way of being comfortable despite not having or being able to fit everything in working memory all the time) led me to conclude the same as you, i.e. it's an important signal (at least for our company's work) to be able to jump in and effectively work in a somewhat large existing codebase.
I'm amused at the comments suggesting this is too easy (or your own suggesting the same for redis), I think if I tried this I would have filtered almost everyone, at least if they only had an hour (minus time for soft resume questions to ease them into it, and questions they have at the end). So many programmers have never even used grep (and that's not necessarily something to hold against them at least at my last job where the "As hire Bs and Bs hire Cs and Cs hire Ds" transition had already long since occurred). I've made two attempts at the idea though by crafting my own problem in its little framework, the latter attempt I used most involves writing several lines of code (not even involving a loop) into an "INSERT CODE HERE" method body, either in Java or JavaScript, and even that was hard enough to filter some people and still provide a bit of score distribution among those who passed. Still, I think it's the right approach if you have a large or complicated existing codebase, and even in the confines of an hour seems like a better use of time approximating the gold standard "work sample test" than asking a classic algorithm question.
Yes definitely -- although it's pretty crazy how much not being under pressure affects your ability to complete the question in X minutes. I've learned over time that good interviews take a lot of experimentation, and the best thing you can do is test/refine questions in real interviews before you rely on them as a signal to make a hiring call on a candidate.
I wouldn't necessarily consider these questions a substitute to an algorithms question, but rather a way of obtaining a different and important signal. An algorithms question may be a valuable signal too, depending on the nature of the role.
This seems so much more relevant than what FAANG companies are asking apparently (never did an interview with them), ie undergrad algorithms and data structures problems, but in a really tricky way. I wonder why your approach to interview questions isn't the norm.
At the time we debuted the question, it was actually a recruiting advantage for us. People would interview with us and other "big tech" firms, and get excited about the opportunity because we asked this question.
I was asked this question when I interviewed in 2020 and finished in about 35 minutes without any C experience. I can confirm that this question made me very interested in working for MemSQL/Singlestore: I loved the experience. The next step of the interview was to implement a pretty large concurrent program. I was given conflicting/very little information from the recruiter on how to complete the assignment, which I thought was odd given that in my estimation this was far too large for an interview question. I submitted my work after a couple hours of effort in an incomplete state because I decided I didn't really want to work for the company anymore. I never heard back. Really soured me on the company.
I definitely appreciate the intent of it, and think measuring along that axis is incredibly important (and rare). But as someone who continually struggles more to dive into some codebases than my peers, I can say that this task is far too easy and wouldn't help you gauge my ability to do that at all. Something that requires implementing a new feature that isn't a near-mirror of an existing one (ideally one that requires a bit of thought at the design level) would probably be a much better measure, but I don't know what that might be for something like memcached.
I think you'd be surprised how many candidates you'd be able to filter out using a very easy task, despite the great credentials touted in their application.
That said, I don't think this is a VERY easy task, per se. And even if it were, there's achieving the stated objective, and then there's taking the opportunity to demonstrate some additional skills. I'm in the Java world, and whenever I had candidates do exercises, them building a solution with no unit tests, or at least not voluntarily acknowledging that omission, would be a red flag.
And then I'd ask: let's say this wasn't an exercise, what things would you add before shipping this to production, which would be a great conversation starter around (again) testing (second chance), observability, logging, and so on.
That's a great point! We didn't actually think of this question as much of a "measure" (and I think the blog post alludes to this as well with a "Fizz Buzz" analogy). We thought of it more like a filter — if you couldn't pass, then you _may_ not be adept at navigating through a database-y codebase at the pace that was necessary (at that time) for the work we did.
We had other questions (which I won't spoil) that tested more complex system-design skills, without having to implement them within an existing codebase. We felt this provided clear independent signals (ability to "hack" within a codebase and ability to reason through complex system design) vs. conflating the two.
As the team grew, and we needed folks who were much better at one or the other, this helped us round out the team with folks who excelled at either skill.
It depends on the project, but it actually can take 2 or more hours.
It can start with getting it to compile in the first place. I've spent days on building C code bases, getting the dependencies there, everything in place, then you wait, then there is some error, then you google, then you don't find anything, then you debug a makefile of one of the dependencies, etc etc. Some projects don't compile with packages from apt-get at all. Take most Rust projects for example, unless you are using a rolling release distro, your rustc version is too old.
Then you need to look at the quality of the code base. Sure maybe it's just a simple else if chain and one of them has an add and sets result = a + b; inside. Then you can copy it, modify it for multiplication, done. Trivial!
But maybe it's done via a plugin system and spread over 5 different components, and there's actually two plugins you have to add, one for the repl, one for the internal handling, and there is non trivial communication between them. So you copy the 2 thousand lines of plugin code spread over 8 files and there's a bug. How do you debug it?
You also need to be able to start the thing. Maybe it's not meant to run on developer's laptops but instead is deployed in the cloud and you have to use some unfamiliar tooling to access it?
Now, apparently in this instance the project was easy to compile, modular enough (and also not too modular) that the place needing the change could be identified quickly, and easy enough to run and test that you could also explain your solution within the hour. But this is not guaranteed.
Any interviewer who designed a question with those kinds of pitfalls in it has done a bad job. This question is good partly because the memcached codebase is itself good. It’s not so abstract that the candidate needs to chew through a whole class hierarchy or plugin system in order to make changes, and yet it is complicated enough that the candidate will have to make important choices, particularly with how to handle a new type of operation.
> But maybe it's done via a plugin system and spread over 5 different components, and there's actually two plugins you have to add, one for the repl, one for the internal handling, and there is non trivial communication between them. So you copy the 2 thousand lines of plugin code spread over 8 files and there's a bug. How do you debug it?
In enterprise Java, there is almost certainly a crazy amount of indirection, both static and dynamic.
A trivial task like "Write an API to retrieve a certain set of records from the database, and return them to the client caller" can involve:
1. Figuring out how the framework calls your API, and where the parameters are in the call. This is specified in some config (yaml/xml/whatever) file somewhere that the framework reads so that it can validate the parameters before it passes them to your code.
2. Figuring out the config-file syntax for specifying the validation criteria for each parameter and for the return value.
3. Determining if the specified list of columns is already encapsulated in an existing class. If it is, you're in luck: just instantiate that class with the correct criteria and read the fields into the instance you will return to the framework.
4. If no existing class satisfies your needs (99 times out of 100, there won't be), you realise by looking at similar existing API code that you need to create an application-logic level class that contains fields with all the values you require.
5. You do #4 above; then you read the existing API call chain again, and see that all other application-logic level classes don't talk to the DB directly anyway. They go through a DB-level class. This is because at some point in the future someone may want to swap out the underlying DB with another DB.
6. The DB-level class is tied to a specific ORM (say, MyBatis). It uses classes generated by MyBatis. You write your DB-level class to use a MyBatis-generated DAOClass.
7. You then dig into the details of the ORM, figure out which config-file (xml/yaml/whatever) to modify, how to write the template that contains the specific parameterised SQL statement (if using SQL), how to specify to the ORM that each column be matched to a specific Java type, and how to convert it if it is some other type.
8. You Aren't Done Yet! For each of those classes you created (the API handler, the application-level class, the DAO class, and maybe the generated classes from the ORM), you need to write unit tests! The unit tests, in this simple example, will easily be twice as large as the actual new code written/generated.
9. You're done, for now, until integration testing in a pipeline fails. But that's a different issue that will always exist regardless of language or development methodology, so that one gets a pass.
And, believe it or not, the above is actually a simplified version of how it actually gets done. For example, there's going to be more lines of code mocking an instance in the tests than there are lines of code implementing the class itself.
Also, the framework calls your API, but your API probably doesn't directly instantiate the Application-level class - it hands it off to a 'call_handler' type of class that performs a threaded/sleep invocation[1] of an actual 'call' method in your API class.
Also, anything your class may need (instances of other classes that perform non-local stuff, like recording metrics, logging, lookups, etc) will be injected at runtime, and so you cannot simply declare an instance of the (for example) LookUpAddress class - you have to request an instance of that class from some sort of dependency mapper or injector, which will be provided to you when the framework calls into you.
This is normal. There are various reason for every single step above, and large companies (FAANGs, for example) will adhere to every single step (and more, like using classBuilders) because it checks all the boxes except velocity.
A startup, OTOH, had better forgo all of the above; create a monolith with no runtime injection or modification and skip all the unit tests (use more beefed up integration tests, for example). A startup can't afford to spend a day adding a single simple API, when, in the same time, they might be adding 25 new simple APIs.
There's no point in all of the rituals if the startup finds out 5 months in that the product doesn't have market-fit. It's better to determine market-fit in the first month or two.
Having paying customers is more important than having dependency injection, or the ability to switch databases.
[1] When not using Java, the 'call_handler' will use some sort of async/await call to do a proper async request.
The blog post is where that duration came from; I think both the asker and the question author missed the first part of the line from it I quoted.
If you search the rest of the comments here for "three hour" there's quite a few people jumping on that rather than just accepting it as "eh, I don't remember so I'll just throw something out there".
This was a fun distraction to start the morning with, and I like that it is testing a set of directly-useful-on-the-job skills. It feels well calibrated in terms of difficulty.
Personally I feel this question ranks low on a quality scale. Too many moving parts and too many variables.
There’s a million different reasons for why they could bomb it and a million different ways they could succeed at it and none of those reasons might be evident to the reviewer.
I don't know. You seem, like others in the thread, focused on "implement 'mult' as age times parameter value", but the goal of the exercise seems much more around: what's your process of going into a code base, asking the right questions, making some good observations by yourself, and not getting stuck. It'd be interesting to hear from the original interviewer if they even cared about how well the solution ended up working, or if the candidate's ability to demonstrate understanding about the (presumably unfamiliar) code base was more important to them.
You hit the nail on the head. We explicitly did not care about how "well" the solution was written (some solutions were very surgical, others involved a lot of copy/pasta, some folks wrote very detailed comments, etc.), although we did point out and ask you to fix solutions that were incorrect. For example, some candidates implemented "mult" as "add" in a loop, with the row lock acquired and released in each iteration. This solution is vulnerable to a data race if another user concurrently updates the value.
It seems a simple way of building the "mult" solution as a composition of the existing primitive(s), here "add". But agreed that I would not find the trade-off acceptable either.
Yes - so the safe thing to do is to put the lock outside the loop. Anyone experienced in multithreading will instinctively know this is the correct alternative for any codebase (and it performs better too).
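A minimal pthread sketch of the difference (hypothetical names, not memcached's actual locking API): releasing the lock between iterations lets a concurrent writer interleave with the half-finished multiply, while holding it across the whole read-modify-write keeps the operation atomic.

    #include <pthread.h>
    #include <stdint.h>

    static pthread_mutex_t row_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t value;  /* stands in for the cached counter */

    /* Racy: another thread can update 'value' between iterations, so the
     * final result may not correspond to any consistent value * factor. */
    void mult_as_repeated_add(uint64_t factor) {
        pthread_mutex_lock(&row_lock);
        uint64_t base = value;
        pthread_mutex_unlock(&row_lock);
        for (uint64_t i = 1; i < factor; i++) {
            pthread_mutex_lock(&row_lock);
            value += base;                /* add in a loop, lock per step */
            pthread_mutex_unlock(&row_lock);
        }
    }

    /* Safe: the whole read-modify-write happens under one lock hold. */
    void mult_under_one_lock(uint64_t factor) {
        pthread_mutex_lock(&row_lock);
        value *= factor;
        pthread_mutex_unlock(&row_lock);
    }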
I agree, there's a million things that can go wrong just in installing, setting up, and compiling the project. I can see losing a lot of good candidates here because they happened to have the wrong version of some library installed.
This type of question definitely hinges on having a pre-prepared build environment. The author of the blog even noted that getting the build environment set up was not part of the test.
False positives: we hired a few folks who could prototype changes but struggled to ship them. This question wasn't solely to blame for it, but it wasn't enough of a signal on someone's ability to _thoroughly_ work within a large codebase.
False negatives: senior candidates who were very used to a particular programming environment (e.g. at Microsoft) and didn't have side projects that kept them up to speed with the basics of editing code on the command line over SSH. We did a lot of work over time to set up alternative environments for these candidates.
I'll add to your false negative case. I write code over ssh almost exclusively, mostly in emacs (and sometimes vim).
A couple years ago I was asked to interview via CoderPad and while I appreciate their attempt at having emacs and vim bindings, the fact that they're not actually accurate was worse than using Google Docs.
That is, it's actively worse to be almost your editor of choice than I think it would be to obviously be a "clumsy editor". In the former case, the interviewer perceives you as poor at writing code when you're fighting with a slightly drunk editor, while in the latter (like at a whiteboard) the interviewer adjusts for the environment.
Edit: the interviewers let me retry by projecting my screen while using emacs locally in a terminal. I hope CoderPad has improved its key bindings since then, but I hope interviewers test out these editing environments before assuming people will be 100% proficient.
For my part, when someone's looking over my shoulder, I get too nervous to rely on basically any editor functionality except cut/copy/paste, undo, and save. I start second-guessing everything. So you may as well just give me Notepad, because I'm gonna look like an idiot in any editor when someone's watching and I'm not already quite comfortable with that person (and the inherent scrutiny/judgement aspect of interviews makes that impossible, in that context).
That's very valid. I was actually only thinking of having someone metaphorically looking over my shoulder, like in a remote interview, but literally having someone in the same room watching is even worse.
To me, purely as far as the using-my-editor part of it goes, it's a lot like having practiced an instrument to an OK level of skill, but only ever playing it alone, then being asked to perform for others, who will make decisions about my future in part based on that performance. So I fall back on playing "Mary Had a Little Lamb" because I'm too worried about screwing up in some stupid-looking way if I attempt something fancier, but that also makes me look like I'm bad at it.
Where in the interview process was this question used? Were there other programming and/or system design questions before and after? Was this a senior-level candidate question?
We experimented quite a bit. Since it was relatively inexpensive to administer (almost anyone could ramp up to conducting it), we were able to test it in a variety of scenarios (even for intern candidates). It ended up being _after_ the initial phone screen, but early into the on-site. There were several other questions that tested other skills (system design, algorithms knowledge, reasoning about concurrency, etc.). Every engineering candidate did the question, regardless of seniority, at least during my tenure.
I recently got this question for an engineering manager position. I got it done in an hour and 20 minutes with all the test cases presented passing. But I don't particularly think this is a good question for senior-level candidates, since implementing it is mostly copy-pasting existing functions, renaming them, and adding a new memcached verb.
I'd guess senior level candidates should solve it without copying and pasting, perhaps make sure test cases are present for the behaviour before modifications are made, and then generalize the existing functions. Or, state why adding more functionality on the server side for every type of arithmetic operation would be unnecessary complexity, and instead add support for any type of atomic arithmetic operation with a transaction/UDF approach.
I'm curious about how you'd evaluate this author's solution. Where does it rank on the rubric you used compared with some of the other ways people approached the problem?
The rubric was more of a "Y/N" for this question. But we'd refer to the author's solution as "great". Messier solutions would involve a lot of copy/pasta and occasionally inefficient or incorrect implementations that tried to repeatedly add.
I can't remember exactly, but I probably administered the question for Arthur when he interviewed (he may remember). I do (fondly) remember working with him.
The author of TFA mentions these solutions usually acknowledge the need for atomicity, and try to achieve it by reusing memcached's locking primitives. I understand the failure is that either they don't get them right or use up all their time without a complete solution.
To be honest: it's how I would have failed this question.
Empathy and kindness are the top, fundamental values I hold professionally, but I am struggling to understand why you would re-assess after a single incident. If a large fraction of candidates showed signs of extreme stress, sure. One candidate, though, isn't enough to determine anything meaningful, in my opinion. As Mack "Bumpy" Freeholder famously said, "one thrown laptop is on them, but two thrown laptops is on me!"
Yes it did, at first. It was one of the first times we presented the question, so we didn't have a great signal on its efficacy just yet. However, other red flags emerged during the interview process, and we decided to try the question out a few more times, with a lot more success. And the rest is history.
How did the logistics of this question work out? Did the interviewees work on the problem live, with you (or the interviewer) in the room acting as a sounding board or as some other resource?
We gave candidates a heads up before the interview that they'd be working on a live coding question, and that they should familiarize themselves with editing code over SSH. Some candidates felt uncomfortable with this, and we were able to make alternate arrangements (e.g. setting up VisualStudio locally on a Windows machine).
During the interview, the interviewee worked on the problem live, and the interviewer stayed silent, except for questions. If they didn't come up for air after 30 minutes or so, I'd check in and see if they were stuck, offer help, etc.
Do you let candidates noodle, on their own, for those three hours?
Worst part of programmer interviews are coding as performance art.
I don't code "out loud". I kind of get lost inside my own head. I can explain myself after I figure something out, not during. If I'm supposed to talk, then I'm thinking about talking, not programming.
Obviously: I've never liked pair programming. Rubber ducking has never worked for me.
I remember being taught pair programming in school and thinking "This is awful". I just can't see how it's supposed to actually work.
Rubber ducking works, but only as a debugging measure. Explaining what your code is actually doing on a line-by-line basis can suss out the root cause of a bug.
I suppose that's the only time pair programming could work for me; by explaining what each line of code does, another programmer could stop me and tell me "No, that's not right".
I hate to say this, but I would have failed the type 2's as well, for just copying and pasting and changing pluses to multiplies.
Assuming signed int16 (for simplicity of discussion), 32767 + 1 is far easier to error-handle (a simple check against the max int value) than 10000 * 4, unless you are just blindly allowing the full range of values and don't care about overflow/underflow.
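To make that concrete, a small sketch of the pre-flight checks (mirroring the int16 example above; assuming non-negative operands for brevity): the add check is a single comparison against the max, while the multiply check needs a division or a compiler builtin.

    #include <stdbool.h>
    #include <stdint.h>

    /* Would value + delta exceed INT16_MAX? One comparison. */
    static bool add_would_overflow(int16_t value, int16_t delta) {
        return value > INT16_MAX - delta;
    }

    /* Would value * factor exceed INT16_MAX? Needs a divide
     * (or __builtin_mul_overflow on GCC/Clang). */
    static bool mult_would_overflow(int16_t value, int16_t factor) {
        return factor != 0 && value > INT16_MAX / factor;
    }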
I absolutely agree with the premise, but for the same reason it's a valuable skill it's also really hard to time-box. I'm assuming that when you picked this particular task you had already scoped it out and knew memcached didn't have any particular bear traps that a decent programmer would step into and lose 2 days?
I had never looked at memcached's source code prior to thinking of the question, but we did try it out (and measure the time) before administering it for the first time.
Love questions like this, since they're way more practical than some Leet Code algorithm I haven't seen since my sophomore-year algorithms textbook.
Looks pretty fun too actually! Nice work and I hope you got some good engineers out of it. Any other retired interview questions you're particularly fond of?
How much searching did you need to do before you found a good codebase to work on? Was memcached the only one you used or did you have a larger set of questions that you picked from?
I got super lucky and came up with the question an hour before I first administered the interview. Memcached was the first codebase I went for, because someone had asked me a question involving it a few months ago, and it was conceptually a "simpler" version of MemSQL.
Later on, we tried to diversify the question to other codebases (because it leaked, although less publicly). I remember trying it out on Redis, which was too easy because the codebase is so nice. We never found a great alternative while I was there (through 2016). Although, the team may have found something since.
At my current company, Impira, we have a similar question on a different codebase :)
It was a lot harder (even at MemSQL) to repeat the success of this question. At Impira, it took us about a month of trial and error over 5-6 similar codebases to find a good (and appropriately relevant) question.
Your problem setup seems to assume that your interviewee is completely unfamiliar with memcached. So how do you expect someone to be able to answer this question if you haven't told them how to add commands to memcached?
[UPDATE] Ah, I see you actually want them to go hack on the memcached code base. That is not at all clear from the problem description. The language of the problem setup is pretty elementary. I think anyone who is actually capable of doing this task would find the description pretty condescending.
The question wasn't administered with the "script" laid out in the blog post. I understand why if you were just reading that, you might get confused. In practice, we had a script that walked the candidate through how to use the memcached client, explained that the command didn't exist, pointed them to the codebase, and instructed them to add the command within the codebase, so that you can issue it through the client.
Ok, I guess you’re thinking of your shell where you can “add a command” by dropping a shell script anywhere on the disk? Or your text editor, where you can add commands by editing some config file? I guess that’s fair. Note however that each of these programs has bent over backwards to enable easy extensibility. The average program does not. This is more like adding a command to sed or grep, which have no extensibility (aside from editing their source code).
This interview will filter for people who happen to already be familiar with the memcached code base, in which the interviewer will believe he has found his genius superhero.
Luckily I did not have to deal with that during my tenure (I left in 2016). However, we had experimented with other changes to memcached as well as similar changes to other codebases. There is no shortage of OSS projects to draw from. However, calibrating something that's doable in about an hour is really tough.
So you hire people who skip over the depth of the locking model and just copy and paste the addition code, but you fail the people who spend time to deeply understand the locking model but can't produce a solution in three hours?
Interesting approach, I bet your codebase is a thing of beauty...
What frustrates me about these interviews is that you need to talk on the phone and code at the same time. Who the fuck does that in real life? I am so fed up with this thinking-out-loud shit.
It's why, despite constantly being asked to interview at FANG, I don't, as I just don't feel like dealing with that kind of stress. I prefer to stagnate into obsolescence where I am.
I saved up enough that I paid off my house and I opened a bar. I just don't have the heart for the /r/iamverysmart crowd that always paints itself into the same corners chasing "neat and new", or for dealing with more political crap where the customer never enters the equation.
Don't worry, there are a lot of jobs that don't do this kind of high pressure interviews. Especially outside of web development there are huge markets of people who are really happy to get their issues solved, and don't really care if it takes one or two days.
I think you might be reading something that isn’t there. Look for comments by ankrgyl where he says that some candidates liked to talk about what they were doing as they did it, and others didn’t, and that it wasn’t really important to them one way or the other. Also, I think he said that these interviews were done in person, and that this replaced a question where coding on a whiteboard was expected.
I would probably also pass this question, but I would reject this company for this type of cliffhanger drama which tells me it's an unhealthy working environment.
Maybe he means the part1/part2 split? Perhaps he misunderstood that this is a device of the blog author, who wants to give his audience a chance to try out the question without spoilers, and that it wasn’t part of the interview itself?
At the time I developed the question, I was employed by MemSQL as a full time software engineer. Eventually, I became the VP Eng. I spent 5.5 years there. It was an incredible experience.
To everyone praising how good this challenge is, calm down. There's nothing great about making somebody sweat for 3 hours during an interview. It's even worse if "someone got so frustrated that they threw their laptop on the floor" like the top comment claims.
Yes, it would be great to avoid hiring people who have a bad temper and tend to throw tantrums when nervous, but, then again, it's beyond me why anyone would need to prove they can cope with such pressure without breaking down. We're talking about a desk job here, which requires writing good quality code according to some requirements during normal working hours, not disarming a bomb with a timer attached to it.
> it's beyond me why anyone would need to prove they can cope with such pressure without breaking down.
Where's the pressure? You have a piece of code, all you need to do is read it, understand it, make a (possibly) trivial modification. That's exactly what you would be doing (or have been doing) every day. You don't have to explain complex algorithms on a whiteboard. There are no high stakes. You can leave whenever you want.
Last time I had to do something like this I was allowed privacy, internet, headphones.
the pressure comes from the context - you're in an interview situation, faced with a previously unknown task (presumably you haven't done something similar before). The result of this "test" determines your candidacy in the job (presumably you want or need this job).
It's like asking why a bomb defuser feels pressured.
> It's like asking why a bomb defuser feels pressured.
The stakes and environment aren't even close for this to be a meaningful comparison. A job interview will always carry some degree of pressure, no matter the circumstances. What I fail to understand is why someone would feel more threatened by this scenario as opposed to, say, answering a barrage of technical questions or solving algorithmic problems.
If "3 hours" is "such pressure" for you, then maybe some teams are not a good fit for you?
For example, I have broken the build in the past - and this prevents everyone else on your team from merging PRs. In this case the expectation was that I fix it "soon" - once I finish my current meeting/lunch/conversation.
If someone is prone to breaking down from pressure given a 3-hour deadline, they would have a breakdown right there.
I agree with you. Sometimes stressful situations arise, and there is nothing you can do about it.
Even if you have the best QA process in the world, and people can't break the build because you can't merge code that doesn't pass the test suite, at some point you are still going to ship a bug to customers, and even if your manager is the nicest person in the world it will be a very uncomfortable situation once you realise that your bug is losing customer data.
If you can't deal with an occasional stressful situation, life is going to suck for you, because there are always going to be stressful situations.
Being given 3 hours to add a minor feature to a pretty run-of-the-mill C program really shouldn't be an issue to anyone who is familiar with C, assuming the job asked for experience with C.
Sure job interviews are stressful, but if you can't calm yourself down in 3 hours and do something that would take an average C programmer maybe 30 minutes of time, you probably aren't a good fit for a job where people are going to rely on you.
Generally speaking if your build process is so fragile that one person creating a breaking build results in a chain of failures and a pressure crunch then that's a work environment problem. I would consider that sort of thing completely unreasonable and consider finding a new job immediately. If you find yourself in that sort of thing frequently then perhaps you should reconsider normalizing it and instead push for actual fixes.
You look at the issue and say it's a personal issue. I look at it and see a dysfunctional development environment issue. Now this doesn't always hold true and sometimes devs have to be held to higher pressure environments (such as having on-call duties): but those kind of things are usually better compensated for and called out as part of the job duties.
Even in the best environments, stuff happens. If you have a central enough role in a large enough environment, you will cause pain for a ton of other people every once in a while, even if it’s not directly your fault. If you have a process that completely prevents all fuckups that an engineer could possibly cause, you either have a relatively small leaf node project or one that is low productivity.
That said, I wouldn't necessarily give this question to the most junior candidates. Coping with a large existing codebase rapidly is a set of learned skills, even if it's just cargo-culting a solution like this.
In an ideal world, no one would ever have to work under pressure, and while some people in this industry don't, many (I would bet a majority) of people do or have in the past. This seems especially true for startups trying to break into a market with competition.
Either you have been lucky with the places you've worked at, have sifted through tens of shops to find a good one, or have very little experience in the industry. I really do hope that none of my assumptions are true and that one day I can find a functional, low-pressure team to work with.
This goes back to what I said earlier: If the work environment is dysfunctional, then you should either work towards fixing it (especially if you're a senior engineer) or work towards finding a better job. Software engineers are one of the few career paths afforded the ability to do so and it would be better for us as a whole if there was less normalization of awful development practices under the guise of 'everyone else has experienced this or does this'.
This goes doubly true for startups in my opinion because rushing things like this and having a shaky foundation just adds another point of failure to your organization. Remember for every startup that succeeds, multitudes of others fall flat on their face due to a build up of problems.
In general I don't do well on interviews because I get nervous.
Having said that, for this specific case, 3 hours seems like a lot of time. All you have to do is find a similar command (probably the increment), trace it in the code (possibly with a debugger and some breakpoints), and copy/paste to add the functionality for the multiply. Then, see if it compiles, it won't, and keep repeating until it compiles. Then, see if it works, it won't, and keep iterating until it does.
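For a flavor of where that loop usually ends up (purely illustrative; these names are made up and are not memcached's actual internals), the arithmetic change itself tends to be a few lines once you've found where incr does its math under the item lock, plus the plumbing to parse the new verb:

    #include <stdint.h>

    /* Hypothetical helper -- not memcached's real code. */
    enum arith_op { OP_INCR, OP_DECR, OP_MULT };

    static uint64_t apply_arith(enum arith_op op, uint64_t value, uint64_t delta) {
        switch (op) {
        case OP_INCR: return value + delta;
        case OP_DECR: return (delta > value) ? 0 : value - delta; /* clamp like decr */
        case OP_MULT: return value * delta;  /* the new case, same lock as incr */
        }
        return value;
    }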
To be fair, in my first job 20 years ago I did this exact sort of thing for 3 years on a very large C/C++ codebase. Also, later when I was at Facebook I did this a lot in that humungous codebase (searching the codebase, finding the right place to make a change, looking for something in the neighbourhood that looks similar and can be copy/pasted).
So this interview question would be right up my alley.
I can only hope that an employer seriously using this in an interview is looking at how someone codes, not necessarily whether they get a result in 3 hours; I can also only hope that they explain this to the interviewee. At best, assuming no prior knowledge, it tests an interviewee's understanding of the broad structure of a given procedural programming language and their use of whatever tools are available to them on the day (hopefully this at least includes coreutils). It has to be treated as a kind-of-realistic, fun challenge on an existing code base; otherwise it is meaningless.
Stories about someone so frustrated they threw their laptop on the floor are genuinely sad. Like, what are people trying to achieve with a test like this if that is a possible outcome? No one would expect a solution to this in 3 hours under normal conditions, regardless of whether someone can push one out - I got something going in an hour, and I have no idea what the hell to make of the code base now except that a) it builds and b) it works according to the fairly vague parameters of the task, i.e. input `mult age 10` to your telnet session and get 380. Regression testing be damned, especially considering there are no parameters for testing (does that 3 hours involve rewriting a bunch of Perl to support the interviewee's hacky changes? It takes a while to run `make test` ...)
Other commenters are saying that it is outrageous to expect someone to put in 3 hours work, without pay, to produce a meaningful result. I agree, as this is notionally their profession, and people are not generally in the business of handing out their work for free. That being said, I haven't had to deal with Leetcode in my career (yet) but from what I've heard it sure beats that.
> That being said, I haven't had to deal with Leetcode in my career (yet) but from what I've heard it sure beats that.
Lucky you... I'm at my 8th job now and Leetcode-style interviews pushed me to the point where I ask up front if this is part of the interview process, because, as stated on my LinkedIn, "I have a personal policy against any type of live coding or online coding tests during interviews and I don't enjoy or engage in any form of competitive programming". If that's a showstopper for them, then they need to find someone else.
No programming position would ever require me to work in such an adversarial setting. I'd quit on the spot if it did.
Having said that, on a personal level, the issue is related to the long-term psychological impact that such practices have on me. I could write entire pages about how I felt during each of the interviews where I was asked to write code and the countless nights I lost sleep due to that, even though some happened over 10 years ago, but, then again, it will only trigger negative emotions that I'm trying really hard to avoid.
Since I'm usually optimistic and a total people pleaser, it took a lot of effort from my side to say never again to this practice after trying in vain for over 8 years to get into FAANG-like companies and, again and again, bumping into yet another person who couldn't resist lecturing me on how "I'm supposed to train myself for such tests". Guess what? I did waste a lot of time training and still I failed dozens of such interviews. Then again, I always managed to land a programming position where the interviewers didn't ask me to code in front of them or under time pressure. I'm at my 8th job now and, after 15 years in this career, I'm confident that I'll never have to put myself through such interviews again, which I make sure to state up front since it's not negotiable.
Yea, I don’t get it. This isn’t describing an adversarial process at all. There’s not intended to be any time pressure either, since the task is chosen to be easy enough to finish in half the available time.
I can certainly understand having gone through annoying interviews; that’s very common. Some interviews are even purposefully designed to be difficult to pass, which can be very frustrating to the candidates. But this question is just asking for a practical demonstration of a candidate’s abilities. It’s not a trick question, it’s not a puzzle, and there’s no secret handshake.
> Yea, I don’t get it. This isn’t describing an adversarial process at all.
You might not be able to get it because you're not me and you likely didn't experience interviews the way I did. However, that doesn't make my perception of this practice invalid. What I mean by adversarial is that you have odds stacked against you, where one party knows exactly what they need to get from you beforehand and your only option is to live to their expectation.
> There’s not intended to be any time pressure either, since the task is chosen to be easy enough to finish in half the available time.
This is not an accurate description of what's happening in reality. Unless you know fairly well up front what the solution needs to look like, Amazon, for example, leaves you with 20 - 25 min to propose a solution, start coding, walk through your thought process, somehow not get nervous if you realise you went down a wrong path, somehow not get angry if your interviewer is smirking at you behind your back (around 2 out of 5 tend to do that based on my experience), somehow focus on the next round even though you know you messed up the previous one and someone made sure to make you feel small for not being able to reason through all the edge cases when having to implement binary search in a sorted and shifted(!!!) array, because hey, it would be too easy to let you get away with implementing classic binary search, wouldn't it?
> But this question is just asking for a practical demonstration of a candidate’s abilities. It’s not a trick question, it’s not a puzzle, and there’s no secret handshake.
It's asking for a specific set of abilities that one might or might not have and their capacity to exercise those abilities varies wildly based on the interview setting. Putting a time constraint on it, even if it's seemingly generous, changes the dynamics drastically. If you're the kind of person who gets nervous in such settings, you can totally get hung up for a good chunk of the allocated time because of some missing semicolon somewhere. It's very easy to confuse the compiler / toolset and have it produce pages and pages of cryptic error messages. When's the last time you had to read through the output generated by some mistyped Autotools syntax?
> I can certainly understand having gone through annoying interviews; that’s very common.
Exactly. I refuse to subject myself to any of this again and I'm sure I'm not the only one.
You know what I really want to see? We streamlined our firing process so we don't have to go through all these hiring shenanigans. When you have a ridiculous hiring process you can still make a bad hire. At most places I have worked bad hires stick around way too long because of HR and Legal BS. Just give the person a really nice severance to part ways. Be done with it. The departing employee can feel good about it, the team can feel good that the bad apple is gone, and you can focus on finding another person.
What's your alternative? This is an alternative to leetcode vomit at the least. I agree this question isn't 100% inclusive, but then no other method I know of is; at the least, the only thing to worry about here is people being frustrated or getting anxious. This can be mitigated by being nice and reassuring them that even a suboptimal solution is okay. As others mentioned, 3 hours is generous and probably accounts for people freaking out and then cooling down.
I didn't comment here to propose an alternative, just to point out that the proposed approach isn't so good for some people, including myself.
Since you asked, maybe people should stop looking for the one true way of interviewing and realise that they're hiring actual humans, not robots. The hiring process in this industry has become this lazy meat grinder which most people with actual decision making power swear by while bemoaning the lack of diversity (and I don't mean just gender). Some day, perhaps the industry will move towards more inclusive approaches.
This is a great interview question. I don't want to be 'that guy' by picking at what may be a detail, but there is another response to look for from a candidate.
We should think very carefully about adding a multiplication command because it introduces a failure mode that may be unanticipated by the client. Code that previously worked could begin to fail after this command goes into use.
Specifically, if the client needs to revert a series of operations on integers, and if the operations commute with one another, there is no need for a mechanism to ensure they occur in any particular order (the usual caveat about working within the limits of precision applies). This holds true for addition and for multiplication, each in isolation, but is not true if they are combined. Change the order and the end result will change. Adding multiplication puts a burden on the client to understand this risk and be explicit about ordering.
Some people may argue that the client should already understand it and they have a point. We can't defend against every possible misunderstanding. I think there is a good discussion to be had on this question and if the candidate were to go there, it would be a favorable sign of experience on their part.
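A tiny worked example of the hazard (numbers are illustrative only): starting from 10, "add 5" then "mult 3" gives 45, while "mult 3" then "add 5" gives 35, so two clients replaying the same operations in different orders no longer converge.

    #include <stdio.h>

    int main(void) {
        long a = 10, b = 10;
        a += 5; a *= 3;   /* add then mult -> 45 */
        b *= 3; b += 5;   /* mult then add -> 35 */
        printf("%ld vs %ld\n", a, b);
        return 0;
    }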
This is a REALLY SOLID response to the question, and I'd hope it would get some strong 'hire' points.
But, to be fair, there's already technically this issue in the memcached API as described in the post, in that it supports both append and add, and "append 0" (on a value that add could also act upon) is effectively the same thing as "mult 10". If a field contains "1" and successive "add 1" and "append 0" commands come in, depending on what order they arrive in the result could be 20 or 11.
So I think the interviewer would be justified in saying 'yeah, let's assume we've evaluated that risk and we plan on adding some really thorough documentation warning people about that risk, so can you just go ahead and try and implement it?'
But absolutely, this was the thought that came into my mind when reading the spec too. Not enough developers think about APIs in compositional, algebraic terms, and being able to see that adding 'multiply' and 'add' together at the same precedence in an API might cause trouble is a really valuable skill.
> This is a REALLY SOLID response to the question, and I'd hope it would get some strong 'hire' points.
I agree -- but unfortunately the interview question doesn't filter for this level of thinking.
If anything, it filters for the exact opposite: your ability and willingness to shove a random new feature into the codebase in 3 hours (or GTFO) -- stability and other consequences be damned.
You can’t expect every question to give you an opportunity to show off every skill that you have. If I were asked this question, I would certainly comment on whether it seemed like a smart thing to do, but I wouldn’t refuse to do it. I can demonstrate both my ability to do some software archeology, and do some coding, and figure out the design of memcached first, and then also mention to the interviewer that I had reservations about the soundness of the operation. The interviewer may even have more questions to ask along those lines, and might be disappointed in candidates who don’t bring it up.
But they would certainly be disappointed in candidates who don’t take the opportunity to show off the programming skills that they have put on their resume.
As long as we agree that, at the end of the day -- it's basically a crapshoot, as to what you can really tell about the other person based on whether they answer these kinds of questions on time and in as much detail as you like.
That is -- I'm not saying these questions don't tell you anything about the candidate. But at the end of the day ... I just don't think they tell you that much.
Certainly not in the divining-rod, "finally I found a question that will sniff out the true h@ck3rz from the wannabe drudges" sense that people seem to think questions of this sort are imbued with.
> “question that will sniff out the true h@ck3rz from the wannabe drudges”
No one ever said that it would. No single question can tell you everything you want to know about a candidate. This question is designed to weed out candidates who talk well but can’t actually do the work. It won’t tell you which of those passing candidates are great and which are merely good; that’s what the rest of the interview is for.
I agree that interviewing is, unfortunately, a “crapshoot” for the candidates. As a candidate you are going to interact with dozens or hundreds of companies, and most of them won’t do a good job of interviewing you. Most of them end up with more candidates than they can really handle, so they end up passing up plenty of good prospects. But I disagree that this question is a “crapshoot”; it gives you specific information that you really want about each candidate, and it does so without a lot of the irritating artificiality we often take for granted in interview situations.
At the cost of 3 hours of the candidate's time (on top of all the other time demands, and to some extent necessarily irritating artificiality of the rest of the interview process).
Let's just hope the total compensation offered is in line with these demands, then.
A lot of commenters have gotten hung up on the three hours, which I think is pretty funny. As explained elsewhere, it was really just one hour and most candidates did the work in half that. I think that this question fits nicely into a standard interview process, where a candidate spends half a day on site (or in video conference) meeting the team and doing interviews with several people.
Because anyone who has a different point of view about things must be... just all hung up about something or other.
Bottom line is -- if you tell a candidate "this could take up to 3 hours of your time" -- then boom, right there, you've asked them to carve 3 hours out of their life (away from their spouse who may be chronically ill, or who knows what else they might have going), in addition to the all the other hours they need to invest in your process before you can begin to take them seriously.
If that's your process, fine -- just be up front please, and own up to it.
I agree with you that interviews which incorporate a larger project that takes multiple hours are usually a waste of time. Usually the project turns out to be too unfocused and too subjectively judged to provide useful information about the candidates. I spent four hours on one once where the only feedback I got was that my solution “wasn’t object oriented enough.”
Having just gone through this question for fun, the codebase DECR command has this clipping code in it to avoid the value going negative:
    if (delta > value) {
        value = 0;
    } else {
        value -= delta;
    }
A caller reverting operations by sending the same values with the operations flipped around must already keep track of whether they asked to drop below 0, or they may not get back to the original value.
Plus, the atomic update happens with a lock/release between each operation, so while you might get the same result at the end of your rearranged ordering, clients may see intermediate results and changing the order would change which values they see, which may or may not matter.
I agree it's a great interview question, but I would never ask anyone to spend three hours doing this for free. Someone comes to me and says they can program, and that's that. They have other employers and coworkers in their life, or just teachers and other students -- people I can ask what it's like to work with them if I want, but at the end of the day, I'll know if I've been lied to in the first 30 days and neither of us want that, so I think all I'm trying to find out at the interview-stage is whether I want to spend 30 days with this person, and I don't need to study the outcome of three-hours of them guessing at what I want to help me do that.
But I like to talk about programming, and this question just creates so many different kinds of ideas in my head too. I like how you immediately think about ordering of events and the implications of that. Maybe memcached needs a division operator too. I think that's fun to talk about. Maybe overflows are important; Maybe the task is to add some new types to memcached to protect against that. Maybe we should add some stats to track the number of overflows (or otherwise give some estimate of the accuracy).
Now I am looking at the slide rule on my desk and wondering what the required precision for the use case is; that is, is it possible that, instead of modifying memcached and having to support your freaky version forever, you can simply instruct the application to increment by the log (multiplied out to the desired precision), then reverse with division + exp on output?
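A rough sketch of that slide-rule idea, assuming the application keeps the key as a fixed-point log and only ever multiplies it (SCALE and the rounding are arbitrary choices here): multiplying by k becomes an ordinary incr by round(log(k) * SCALE), and readers recover the value with exp. The obvious costs are precision loss and that plain adds no longer make sense on the same key.

    #include <math.h>
    #include <stdint.h>

    #define SCALE 1000000.0   /* fixed-point precision for the stored log */

    /* Delta the application would send as a normal incr to multiply by k (k > 1). */
    static uint64_t mult_as_incr_delta(double k) {
        return (uint64_t)llround(log(k) * SCALE);
    }

    /* What a reader does with the stored, scaled log to get the value back. */
    static double decode(uint64_t stored_log) {
        return exp((double)stored_log / SCALE);
    }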
And now I am thinking about supportability: Once we've decided what we want out of memcached, is it worth trying to get those changes into the upstream memcached so we don't have to worry so much about that feature disappearing (or becoming difficult in the future)? Talking to people about the social aspects of programming with open-source can be important too.
So yeah, lots of reasons to like this question. I'm probably going to use some variant of it myself, because wherever the candidate goes with it is going to be informative, but I'm extremely disappointed by the rest of the process (and all of part 2); it's definitely not for me.
I don't know the rest of the process, but 3 hours for an interview seems 100% fine. And practical tests are so much better than random abstract questions IMO. I think your solution is to watch and plan on firing many people within 30 days, which seems worse for everyone.
> I think your solution is to watch and plan on firing many people within 30 days, which seems worse for everyone.
Why did you add the word "many" there?
I didn't use it. I can count on one hand the number of people who flat-out lied to me about being able to program and I needed to fire them for it, and I've been managing and developing software for around thirty years or so.
Do you think that's a lot? Or do you think people lying about what they do is common?
Hiring easily with an open mind to firing quickly sounds like a good idea but civil rights law, at least in the US, makes it very difficult. Let's say that "fire quickly" is one of the few women or blacks you've hired. Even worse, let's say that the "fire quickly" contingent is disproportionately women or racial minorities, through no fault of your own. I hope you're ready to pay a lawyer and a decent settlement. It doesn't matter whether you've actually done anything wrong. It just comes down to the economics of the chance they draw a sympathetic judge, the chance a jury will buy their sob story, the amount of money it would cost you to litigate the case, and the amount of money they're willing to go away (i.e. settle) for. Hope no one in your workplace sent a stupid joke over email that will make you look awful in front of a jury. That makes the settlement number go up by a lot.
You can maybe get away with this practice with a small enough company, but even then, I'd advise extreme caution. If you can filter people out before offering them a job, you're in a much safer position legally.
Edit: I think you're in the UK. I don't know much about employment law over there, but I gather the situation is actually better.
> Edit: I think you're in the UK. I don't know much about employment law over there, but I gather the situation is actually better.
I'm in Portugal actually, but I've lived in the UK and the US (I've actually worked there for almost 20 years). In a past life I was hired as the country manager for a British company working out of New York, and I had to receive training on employment law in both the UK and US because they actually take it very seriously in the UK, much more so than in the US; an employee (or former employee) can ask a tribunal to decide if their termination was fair, and they absolutely do look for race and gender selection. I can appreciate that Americans might not get a very good education on workers' rights, so it is perhaps worth mentioning to an American that when I fire someone, I'm doing it after I've had that decision reviewed by counsel, and with that training.
Now when I said they lie to me, perhaps you got some idea that was just my opinion or something. I didn't mean to imply any amount of whimsy and tried to avoid any words that might suggest it; I admitted elsewhere I've fired less than 5 people for lying to me, and that's paper evidence of a lie. I've probably hired hundreds of people at this point in my life, so we're talking about a 1-2% problem where even if we have to pay a six month settlement, that's peanuts compared to what we save:
See, if I had to spend 3 hours on each candidate, and to hire hundreds of people I've maybe interviewed thousands, we're talking literal years of my life. I can't realistically do anything else with that time. And not just my life: I have a fiduciary duty to the company I work for; I can't in good conscience spend the money in the budget for my salary to avoid a much smaller potential penalty that probably won't even happen.
But I think it's important to think about some of the things you're saying: If someone thinks they're doing what I'm doing, and find themselves firing so many people that there is any racial or gender bias in the pool they're firing then I don't think they're doing what I'm doing. I might even wonder if they're racist or sexist myself, because my point is this shouldn't happen often enough to worry about.
The problem is that there's often a gender and racial bias in the pool of qualified workers, but the courts are basically willfully blind to that. Deviation from the overall demographic breakdown is taken as evidence of discrimination, even if those demographic groups have very different breakdowns of qualifications.
you can only edit your comments for a limited duration, like one hour. Otherwise open the link for that comment (on the date) and there should be an edit link next to the parent link.
This is what experience and skill look like: careful consideration and really trying to understand what you are doing. Too bad it's so difficult to market this in a superhero-genius package that managers will buy.
There's another potential issue that may or may not matter - since the value is stored as text it probably can't overflow in memcached - but the multiplication COULD overflow internally in the C code; addition could do this also - I would wonder how it's implemented.
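If one did want to guard the internal arithmetic, here's a hedged sketch using the GCC/Clang overflow builtins (whether to wrap, clamp, or return an error is a design decision; I'm not claiming this is what memcached does):

    #include <stdbool.h>
    #include <stdint.h>

    /* __builtin_mul_overflow stores the wrapped product in *out and returns
     * true if the mathematically correct result did not fit in a uint64_t. */
    static bool mult_overflows(uint64_t a, uint64_t b, uint64_t *out) {
        return __builtin_mul_overflow(a, b, out);
    }

    static bool add_overflows(uint64_t a, uint64_t b, uint64_t *out) {
        return __builtin_add_overflow(a, b, out);
    }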
Why is the type 1 programmer being looked down upon? I mean, I am sure that I'd want to check how the lock implementation works. I'd also want to know if it is modular enough to be called from elsewhere or whether it is baked into the add step. I sure do want to read the add implementation multiple times and make sure there isn't some magic somewhere.
And finally, this problem looks a lot like "can you see the pattern?" I mean, sure, but a lot of people who come to the interview with competitive-programming standards might take the whole 3 hours because there's a lot to rule out.
I guess this is a fair question with a proper IDE and grep tools. The timing does look outrageous but if you are interviewing at a company which builds databases as a product, you better know how basic operations are provided.
Now that's a compliment. "We couldn't use your code as a basis for interviews because it was too obvious how it worked and how to modify it." Definitely a condition worth striving for.
Yes, type 1 is actually better than type 2. Type 2 is hyper-focused and just blindly copies and pastes the incr portion as mult. Type 1 is actually curious about the codebase and wants to explore it a bit, to see how the locking works and validate its correctness, and to explore different approaches to see if incr can be used to implement mult. Type 1 will end up learning a lot more about the codebase and do well in the long term.
Definitely yes! Type 2 will get a result out faster, but they’re less likely to pick up on complicated patterns that have emerged within the system, and may not reach a point where they understand all of the parts and how they combine.
Type 1 will definitely take longer at first, but given a few months in your codebase, they’ll be able to make more effective trade-offs and understand the system better as a whole.
Talking generally of course, there are probably some type-2’s that will still find time to prod around, and some type-1’s that will never get faster… but I think someone who doesn’t just assume that their guess about how a system works is correct, will be more likely to produce quality code in the long term.
That was my reaction as well. To me, it seems like you cannot take it for granted that the existing locking, adequate for incr, is also adequate for mult.
It could be that in database circles, nobody ever uses any magic and only very strong locks are employed, and perhaps the candidate is supposed to be aware of this.
Yes, I can think of a couple of ways to do a lockless atomic increment/decrement, but those won't translate well to multiply. Blindly assuming incr/decr use a generic lock that can cover mult as well, without understanding how atomic operations work in the product, is asking for trouble down the road.
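A small illustration of why the lockless route doesn't carry over directly (C11 atomics; this says nothing about memcached's actual implementation, which a sibling comment notes does not use atomic add): increment maps onto a single fetch-add, while multiply has no hardware fetch-mul and needs a compare-and-swap retry loop.

    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t counter;

    /* Lockless increment: one atomic read-modify-write instruction. */
    void incr(uint64_t delta) {
        atomic_fetch_add(&counter, delta);
    }

    /* Lockless multiply: no fetch-mul exists, so retry with CAS. */
    void mult(uint64_t factor) {
        uint64_t old = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &old, old * factor)) {
            /* on failure 'old' is refreshed with the current value; retry */
        }
    }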
You can't take that for granted, but it turns out in practice that memcached does not use atomic add, and so it happens to be straightforward to support multiplication with the same set of locks.
I think it depends on the needs of the company. I'm reminded of Joel Spolsky's "smart and gets things done" adage (and then book): type 1 is smart, and you want some of these people.
But you don't want people who, when faced with a task that can take 3 hours (and has been given time pressure that requires it to take 3 hours), instead spend 12 hours because they can't live with the ambiguity of being uncertain that they're using locks the right way.
The approach described in part 2 is highly opportunistic. You generally want to hire staff who can be opportunistic when necessary, even if that is not always the right approach.
It's also not 4x faster. It's only 4x faster at the very beginning, until all the weird bugs get in the way and everyone's too scared to make changes because every little change breaks the customer experience.
I'd find this question frustrating because it feels like a trap. I was looking for the trick -- was I missing something in the question that made the obvious approach untenable? No, the answer is simple, just add a way to multiply numbers with the multiply operator.
"They gave him an intelligence test. The first question on the math part had to do with boats on a river: Port Smith is 100 miles upstream of Port Jones. The river flows at 5 miles per hour. The boat goes through water at 10 miles per hour. How long does it take to go from Port Smith to Port Jones? How long to come back?
Lawrence immediately saw that it was a trick question. You would have to be some kind of idiot to make the facile assumption that the current would add or subtract 5 miles per hour to or from the speed of the boat. Clearly, 5 miles per hour was nothing more than the average speed. The current would be faster in the middle of the river and slower at the banks. More complicated variations could be expected at bends in the river. Basically it was a question of hydrodynamics, which could be tackled using certain well-known systems of differential equations. Lawrence dove into the problem, rapidly (or so he thought) covering both sides of ten sheets of paper with calculations. Along the way, he realized that one of his assumptions, in combination with the simplified Navier-Stokes equations, had led him into an exploration of a particularly interesting family of partial differential equations. Before he knew it, he had proved a new theorem. If that didn't prove his intelligence, what would?
Then the time bell rang and the papers were collected. Lawrence managed to hang onto his scratch paper. He took it back to his dorm, typed it up, and mailed it to one of the more approachable math professors at Princeton, who promptly arranged for it to be published in a Parisian mathematics journal.
Lawrence received two free, freshly printed copies of the journal a few months later, in San Diego, California, during mail call on board a large ship called the U.S.S. Nevada. The ship had a band, and the Navy had given Lawrence the job of playing the glockenspiel in it, because their testing procedures had proven that he was not intelligent enough to do anything else."
Medical people talk about this problem as "differential diagnosis". There can be multiple reasons why some symptom is exhibited (in this case, failing the interview). It's not enough to know that the interview question was failed - you also want to know what happened.
A much more common example is that (particularly young) candidates tend to make job interviews into big, stressful things in their head. Then they don't sleep properly the night before, and during the interview they can't think creatively or access deep memories (which are both well known symptoms of stress). They can't answer your questions. You think its because they don't know, but actually the problem is that they're too stressed to think at all.
Long programming questions work great for my anxiety because I can zen out while I'm programming and forget the interviewer is there. But everyone is very different with this sort of thing.
The more general way to work around this problem is to listen and pay attention to the candidate. If you asked "Lawrence" in this story what he was thinking about during this interview, he'd tell you about fluid dynamics and partial differential equations. That tells a story. Another candidate will tell you that they felt really awkward programming through an SSH connection because they're used to Visual Studio. Or how they really do (or don't) like some aspect of the programming style on display. Or how (to crib from another commenter) the programming was easy enough, but they're worried that doing so introduces a race condition between atomic multiply and add instructions if they're interleaved. And they hate doing the work because they feel like they're introducing a bug.
None of that really helps with stress bunnies, but you learn so much about what sort of employee they'll be by asking.
I had a job interview once where I was so anxious that I wasn't able to remember my own address when asked. Usually I'm smart enough to remember my own address. After the interview (no offer) I was able to remember my address and navigate back to my apartment without much trouble, so I think that lends credence to your theory that stress can cause memory lapse.
Yep and it also reinforces biases in interviewers. They start to think people really are just incompetent despite their experience - and start to believe even more in high pressure interviews. When the vast majority of “can’t believe the candidate couldn’t even do a for loop” is due to stressors.
> When the vast majority of “can’t believe the candidate couldn’t even do a for loop” is due to stressors.
This idea keeps me up at night, but do you have any evidence for it?
I've interviewed plenty of people with 15+ years of experience who really struggled to do basic programming tasks. And who didn't show any obvious signs of stress.
To this day I have no idea how many of them were not performing because they were stressed out of their minds but hiding it. Vs how many were simply not very good at programming. I have no idea how to tell the difference when they don't make it obvious to me.
No evidence - how would you even begin to get it. Just my feeling from doing many interviews as well. I still recall a CS PhD from a top school fumbling with the most basic for loop in an interview I was doing. I just can't believe people are this incompetent. I think the simplest explanation is nerves - even mild levels of stress are known to shut down people's ability to think.
Ergo, I strongly suspect the current tech hiring culture filters out anyone who has above average sensitivity.
I believe you. The struggle is, when a candidate is failing a job interview, how can you tell the difference between someone who's good but stressed vs someone who's just not very good?
Because weak candidates need to submit a lot of applications to get a job, most job applications are from weak candidates. It's easy to feel compassion for people who have panic attacks. It's much harder to make a filter for weak candidates which doesn't also filter out people with performance anxiety.
Seems like we don't even know how big this problem is. Is it 1% of the candidates? 10%? 50%? I have no idea. And it sounds like nobody here has any real idea either.
I'd appreciate if you expressed your opinions explicitly, rather than as snippy backhanders.
> Do we use whiteboard interviews for standardized tests or certifications? No.
Your position is not clear here. Are you claiming a written exam would be a better assessment for programming ability? That sounds bad?
For what it's worth, I don't think whiteboards are the best way to assess programming skill either. My preference is for supervised programming assessments - like this article suggests. What's your preference and why?
>> Seems like we don't even know how big this problem is.
> Seems like you don't know.
Correct; I don't know. That is why I asked.
If the answer is obvious to you, maybe instead of insults you could link to a study?
> their testing procedures had proven that he was not intelligent enough to do anything else
I see the final sentence as entirely detached from the rest of the story.
My take: their testing procedures had detected that he lacked basic awareness of the situation. When he received a (written!) communication, he wasn't able to imagine that the author was not a smarter version of himself, despite a multitude of clues. The idea didn't occur to him.
"Can you understand a complex code base, figure out how to zoom in on the section of interest, and add a function?"
Far, far too many people interviewing for coding positions can't do that. Though I question how many could whip out FizzBuzz in 5 minutes and would then fail the more complex version. At some level, either you can code, or you can't. And, importantly, at some point in your career (if you go down the standard management track), you will typically lose the ability to code. If you've not done it in 5 years, you probably can't do it, in a practical sense. Not that you can't re-learn, but you can't just jump into a coding interview and ace it, either.
If you find yourself at a point in your career where coding interviews are a thing, "having a side hobby project that requires coding" is a very useful way to be really quite good at coding interviews. Plus, if it's your hobby project, there's no "Hrm... can I talk about that?" sort of problems when you're asked about a time you X'd with Y. You can talk about your personal hobby project absolutely as much, and in depth, as you want.
This isn’t an “Advanced FizzBuzz” if it takes three hours or even one hour. The whole point of a FizzBuzz question is that it’s a fast, like 2-3 minutes fast, heuristic for separating the wheat from the chaff and this isn’t that at all. At some point it’s no longer a FizzBuzz and just a coding challenge. Even the author’s answer reflects that (though he calls it a FizzBuzz)
I don’t know a thing about memcached internals but, as presented here, the potential encoding and concurrency issues mean there are likely levels to the complexity that will take someone quite a bit of time to resolve if they’re not familiar with memcached internals.
But again memcached and even C++ isn’t my thing. ¯\_(ツ)_/¯
"Jumping into a non-familiar codebase and being useful" is literally what someone is considering paying you for.
Years ago, I got tired of some owners of a company I worked for hiring bench techs (small business IT support and such) who'd clearly never opened a computer before (I don't know why, but as the closest person I couldn't focus on any of the work I was supposed to do when constantly being interrupted by bench techs), so I "broke" a scrap computer as an interview problem, rather extensively. RAM wasn't seated, a PCI card wasn't fully seated, I think the power switch was halfway pulled off (enough that it didn't work but was visibly wrong), and I did some terrible things to the partition tables as well. The goal, which it succeeded quite well at from my point of view, was to see where a possible bench tech's skills ended, then help them through. If you'd worked on computers a bunch, and one wasn't powering on, "reseating stuff" was a useful enough response, which would either clear the corrosion on some pins or ensure everything was actually planted in place, which would get this machine booting (after you pushed the power button connector back in), and let me observe what you did with some weird boot errors. There were Windows and Linux environment boot DVDs lying around, so, pick what you know.
The result of this was that our next bench tech hire had the skills to do stuff himself, and generally left me alone to do the stuff I was trying to get done.
The thing with these tests that people don't seem to "get" is they're not pass/fail, not really. If someone has good work to show it may be perfectly fine - they were on the right track, hit a snag, but it's clear they know what they're doing and would get somewhere.
In my experience with these questions, it becomes painfully obvious very quickly who will and who won't.
There already exists a function that does the exact thing the question asks for, handling all the atomicity complexity.
What this comes down to is can the candidate: figure out how to build the project, grep to existing implementation and copy, paste, change a plus to an asterisk. And also, can they articulate the work-to-be-done this simply.
I've never seen the first item in that list tested in an interview and .. I don't hate the idea.
> What this comes down to is can the candidate: figure out how to build the project, grep to existing implementation and copy, paste, change a plus to an asterisk. And also, can they articulate the work-to-be-done this simply.
While that’s the gist of it, it hardly seems that’s quite enough from the author’s solution, nor should it be enough for an interviewer. The question isn’t how to write, or copy-paste, a function, but how to add the command interface as well, which, if one doesn’t know memcached, can be more complex than the function itself and isn’t covered by so trite an answer.
This might just mean that you've been overly conditioned to expect bad interview questions that do rely on a "trick". But if so, I'm sure you're far from the only person in that position. Would you be less frustrated if it came with a disclaimer, along the lines of "this is not a trick question"?
(I realize this might sound sarcastic, but I'm serious. I think the way interview questions are presented is very important, and often overlooked.)
My go to interview question for the last few years has been kind of similar to this one, in that it's about looking at, exploring, and modifying code that already exists. I also let people look things up, ask stupid questions, etc.
You would not believe how many times I have to tell some people that it's not a trick, I really don't mind if they google something or use `man`. That I'm not trying to trick them into having to know what the arguments of pthread_create are by heart, or that yes they can compile the program in coderpad as many times as they want.
Really unfortunate that so much of the industry thinks interview questions should be like Shyamalan movies.
(also sadly I'm changing jobs soon and I'll have to abandon this question and I don't think it'd fly in the new job's interview policies anyways)
I’ve had plenty of interviews for software engineering jobs where I’ve been asked trick questions. Like “explain why this code is broken” when it’s not actually broken.
I downloaded the code and had a look. For context, I've ramped up on quite a few code bases in my day in a bunch of different languages. Here was what I thought would happen, in order of likelihood:
1. The code would not build at all on a standard linux distro unless you knew exactly what packages to install. (I had an interview like this once)
2. Locking the key was going to be inextricably linked to the operation (got very worried when I saw `add_delta` and `do_add_delta`). Turns out add_delta just calls do_add_delta wrapping it in a lock. The lock API is very simple.
3. There would be a massive amount of code duplication that would make it difficult to add this feature without doing serious refactoring. Turns out you basically just change `bool incr` for `math_op_t` or something and provide a way to map from binary/string input to math_op_t.
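To make that last point concrete, here's a minimal sketch of what that kind of change might look like; math_op_t, apply_math_op, and the signature are made up for illustration and are not memcached's actual code:

    #include <stdint.h>

    /* Illustrative only -- not memcached's real code. The idea is to replace
     * the bool incr flag with an operation enum, so the arithmetic lives in
     * one place and adding a new operation is a one-case change. */
    typedef enum { MATH_OP_INCR, MATH_OP_DECR, MATH_OP_MULT } math_op_t;

    static uint64_t apply_math_op(math_op_t op, uint64_t value, uint64_t delta) {
        switch (op) {
        case MATH_OP_INCR: return value + delta;
        case MATH_OP_DECR: return (delta > value) ? 0 : value - delta; /* clamp at 0, like decr */
        case MATH_OP_MULT: return value * delta;
        }
        return value; /* unreachable */
    }

Existing callers that pass true/false for incr would then pass MATH_OP_INCR or MATH_OP_DECR, and the protocol parsers just need to map the new command name to MATH_OP_MULT.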
This would be very easy. Depending on how comfortable I was with the editing environment they talk about elsewhere in the hn comments it wouldn't be too bad.
I was waiting to see a weird gotcha about multiplication not being atomic or something. Actually, question for people with more low-level knowledge than me: is that solution atomic?
All math operations on your ordinary everyday cpus are equally non–atomic. They read a value, potentially from memory or from cache, operate on it, and leave the result in a register. Then your program probably has to store the result somewhere, or perform more operations on the result. This is why the memcached software has an explicit locking mechanism; any operation on a cached value involves taking a lock, doing the operation, storing the result, and then releasing the lock.
Multiply doesn't have an atomic equivalent. You can work around that by reading, multiplying, and then using CMPXCHG to atomically update the variable if nothing has changed while you were processing. But you need to think about the ABA problem[1]. (I think in this case it's safe, but I haven't thought about it enough. Lock-free algorithms have subtle bugs.)
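A minimal sketch of that read-multiply-CMPXCHG loop, using C11 atomics on a plain in-memory value (nothing memcached-specific, and no overflow handling); it just shows the retry shape:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Atomically multiply *target by factor via a compare-exchange retry loop.
     * Returns the value that was stored. Sketch only: overflow is ignored. */
    static uint64_t atomic_multiply(_Atomic uint64_t *target, uint64_t factor) {
        uint64_t old = atomic_load(target);
        uint64_t desired;
        do {
            desired = old * factor;
            /* On failure, atomic_compare_exchange_weak reloads 'old' with the
             * current value, so we just recompute and retry. */
        } while (!atomic_compare_exchange_weak(target, &old, desired));
        return desired;
    }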
Is ABA an issue here? "Replace A by X*A" should be perfectly fine if an atomic swap gives you A back, since it means it was actually A at the time when you performed the swap. That it was B a moment ago doesn't seem to matter.
Updating an entry in a database table (which is what a memcache entry is, in practice) is not usually as simple as just "change the value from X to Y". Locking is likely necessary no matter what.
I think the test is whether you can grep an unfamiliar code base to find the implementation of existing functionality, understand it, and modify it to add very similar functionality.
Like do the commands have to be registered somewhere? Is it a string mapping the operation name to a function pointer? A cascading if-then statement? Etc. etc.
It's just a test of diving in and understanding existing code in a relatively short amount of time.
What's the obvious approach? You do have to dig into the source of memcached and understand some internals. With good use of grep and vim it can be done quite quickly, but I wouldn't call it obvious.
What kind of skills does this type of test measure?
I think the main ones are:
* How much about the language the candidate knows. Not the control flow and data types, but how to navigate and where to look for information. And that leads to the point below:
* how quickly one can understand a codebase (at least part of it) and start delivering
What else?
If that's one of those "think out loud" tests, it's possible to get a clue of how the candidate thinks, but that's not specific to this test.
If I encountered this test, here’s what I’d think (or hope) is being evaluated:
- Do you understand the problem, or seek better understanding if not? (Obviously the problem here isn’t extrapolating arithmetic, but identifying the importance of an atomic operation.)
- Do you recognize the value of the change, or question it if it feels non-obvious?
- How do you think about approaching an unfamiliar codebase/code path?
- How do you think about the challenges you encountered while working on the problem?
- What else felt important to you that I’m not looking for?
All of these questions provide a lot more information than “can you write code?”
Especially meaningful for evaluating actual fit is reactions to walking into pre-existing code. In fact I think this has been, at least subconsciously, my best heuristic for assessing colleagues’ skill maturity. “Juniors” will stare at a problem or ask a lot of easily answered questions; “mid level” devs will bang their heads trying to answer without asking; “senior” devs will have a good intuition for which challenges are discoverable and which are best solved by asking a human or a google or whatever, or at least for recognizing the distinction after some time exploring the problem.
Still winds up just feeling like a trick question: because the immediate response in any real world scenario is "why does the client need this feature?" when a CAS operation is right there.
But probably not what the interviewer wants to hear - maybe.
Aside from being a lot simpler to use, a dedicated "multiply" command could end up being dramatically more efficient.
If multiple clients are simultaneously trying to update the same value, then locking allows them to take turns with relatively little overhead. With compare-and-set, all but one of the clients would fail at the "compare" step and need to retry, requiring additional network round-trips.
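For illustration, here's roughly what the client-side retry loop looks like; mc_gets() and mc_cas() are hypothetical stand-ins for whatever client library you use, not real libmemcached calls. The point is that every attempt costs two round-trips, and contention forces repeats:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical client helpers -- placeholders, not a real client API. */
    bool mc_gets(void *conn, const char *key, uint64_t *value, uint64_t *cas_token);
    bool mc_cas(void *conn, const char *key, uint64_t value, uint64_t cas_token);

    /* Multiply a cached counter by 'factor' using compare-and-set. */
    static bool multiply_via_cas(void *conn, const char *key, uint64_t factor) {
        for (;;) {
            uint64_t value, cas_token;
            if (!mc_gets(conn, key, &value, &cas_token))        /* round-trip 1 */
                return false;                                   /* key missing */
            if (mc_cas(conn, key, value * factor, cas_token))   /* round-trip 2 */
                return true;                                    /* stored */
            /* another client updated the key in between: retry */
        }
    }

A server-side mult, by contrast, would be a single round-trip plus a lock held briefly on the server.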
That’s not or shouldn’t be a trick question! “An affordance for what I’m asked to do already exists” is an excellent answer. Probably the best answer in most scenarios, and the one I’d most hope/expect to encounter from a “senior”.
> * How much about the language the candidate knows. Not the control flow and data types, but how to navigate and where to look for information. And that leads to the point below:
> * how quickly one can understand a codebase (at least part of it) and start delivering
> What else?
Those things are really the vast majority of programming in a professional setting.
I'd say your second bullet point encompasses a number of sub-skills, such as:
* Understanding common coding patterns, and inferring design intent from them
* Quickly prioritizing "critical" vs. "nice-to-have" components of a desired feature (and reprioritizing as you learn more about the implementation constraints)
* Being able to take a partially-complete or slightly buggy implementation, and quickly identify what's wrong with it (e.g. by interpreting compiler errors or using a debugger)
* Gaining confidence in the correctness of code by spending a reasonable amount of time on testing, without going overboard
Seems it measures how well you can use shell tooling to code. Personally, I find it foolish to ignore decades of advances in IDEs, so am not part of the cult of "you're not a real programmer unless you use Vim/Emacs". I also find it ridiculous to ask someone to learn a codebase with unfamiliar tooling.
Why couldn't a potential hire be given 5 minutes to think about a solution and then be asked what their plan is? What benefit is added by 36x-ing the time allowed that justifies putting the candidate through such pain?
I have seen people who can talk very nicely, but are lost once they actually have to touch the code. I have also seen people who are really bad at articulating their plans, but are OK with coding them.
Because there are people who are very, very good at bluffing their way through that first 5 minutes. If that's all it takes to pass the interview, your company is about to make a mistake, potentially a big one.
This would be a totally unsuitable format for this task. When doing this task, most of the time is spent looking at the code and reading what is already there. The actual changes needed are very minimal. It is explicitly not about complicated logic flow or algorithms like most interviews, but rather about a software engineering process and the ability to discover things yourself.
Also, the actual time limit (the real author of the question showed up in this thread) was 1 hour. I'd say in actuality it's roughly a 30 minute process.
If I were running this as an interview question I would do exactly that -- give them 5 minutes to think then 5-10 minutes to describe their plan.
But THEN, I'd let them actually try to DO it. Because there is a huge swatch of people out there who can describe (and sound smooth about it) but can't DO. And there are also some who can do but don't sound very competent when they are asked to describe.
Many interview techniques never assess the ability to DO coding, presumably because it is more difficult and time consuming to evaluate.
This is the right way to handle this problem. Everything after 5-10 minutes wastes both your time and the interviewer’s time. There’s nothing you can discover that’s important after walking through their plan, other than to figure out if they are familiar with code bases like the one you’re using.
Couldn't disagree stronger with this. In fact, I love this problem, in that it gets rid of most of the complaints about engineering tech interviews: (1) it's "real world" (for the job in question), (2) can be reasonably done in the time allotted, (3) involves a task that is most common for developers (modifying existing code), (4) shouldn't require extra preparation.
I have seen many times where a candidate can actually explain the solution to a problem, but the translation from algorithm to code is extremely slow, or not done correctly at all. There is real skill in being able to output code quickly, even if you already know the English-language description.
I disagree with that. If you’re hiring a programmer, surely you want to see them write some code.
This question has the added benefit that they don’t have to write a whole program from scratch, they have to deal with a real–world program instead of a toy program created specifically for interview purposes, and they have to demonstrate that they can read and understand other people’s code. The latter seems really important to me, as apparently it was to the author of the question, because we spend so much of our time improving code that has already been written instead of writing completely new programs. Of course, I am especially good at these software–archaeology skills, so I suppose I could be biased.
There’s many many people, far more than not, who can make a plan, even write the code but can never actually get it to run. They can’t compile things without someone helping, and they sure as hell can’t debug to save their lives. This is a good test to weed them out.
> Via its incr and decr commands, memcached provides a built-in way to atomically add k to a number. But it doesn’t provide other arithmetic operations; in particular, there is no “atomic multiply by” operation.
> Your programming challenge: Add a mult command to memcached.
Protip for people in junior roles: if you're interviewing for a more senior position, the correct answer would *not* be to jump straight in and waste 3 hours implementing something that you then have to maintain forever.
The best answer, from an *engineering*[1] perspective, would instead be noticing (or knowing) that memcached supports a CAS command (https://github.com/memcached/memcached/wiki/Commands#cas) that would allow you to implement equivalent functionality without changes to memcached, and trying to confirm why that would not be a viable solution. If, and only if, CAS is confirmed not to be a viable solution (e.g. excessive contention, unacceptable pX latency, ...) then you should spend the 3 hours (+ all the maintenance effort required until the end of life for that solution).
The risk, if you jump on the solution uncritically, is that you show you did not spend any time trying to actually understand the problem and did not consider the pros and cons of viable alternatives (and this is a problem both during an interview and, especially, when you're doing actual work), something that is normally frowned upon in senior roles.
Obviously, being an interview, it's all play pretend so you'll eventually have to complete the coding exercise, but if I was interviewing you for a senior position and you skipped the part above it would be a pretty big red flag (same in the unlikely case I were to ask to implement some functionality that is commonly found in the standard library of most programming languages: huge red flag if you don't ask why the functionality provided by the standard library is not a viable solution).
During the interview, we are not modelling the whole process -- no one would tell a senior engineer just a single sentence "please implement mult command" -- there will be a longer preceding story: maybe the latency has to be very low, or it is a request from a stubborn customer, or you are profiling an optimization... So for reasons of time, we assume that the previous steps have been done, and it was decided that the mult command is the way to go.
If I were the interviewer and a candidate mentioned the CAS command, I'd compliment them on their memcached knowledge, and tell them that it is not a viable solution. And this would not affect my evaluation of this stage one way or another.
> So for reasons of time, we assume that the previous steps have been done, and it was decided that the mult command is the way to go.
That's the mindset that leads to crippled code bases. One should always question the process that arrived at a potential solution. Ideally before spending hours implementing and years maintaining said solution.
I'd hire a candidate that thought outside the box with CAS on the spot, they offer more value overall than a Get It Done fast coder ever will.
Sure, and while I would mention that when talking over the question, I also wouldn’t turn up my nose at the chance to show off my programming skills. This is an interview, after all.
> no one would tell a senior engineer just a single sentence "please implement mult command"
It definitely happens (at least it did to me) to be given programming tasks without context. Maybe my comment wasn't crystal clear on this, but I was not providing advice for interviews to the specific company mentioned in the OP, rather general advice for senior roles. In this case the claim that "no one would tell ..." is hard to maintain.
> this would not affect my evaluation of this stage one way or another.
Same point applies here. We all know different interviewers have different ways to evaluate. Therefore it's helpful to cover your bases. Hence my comment to not skip that part. For some interviewers it won't matter, but for others it will be a rather important factor.
> and tell them that it is not a viable solution
Just for the sake of discussion: if I were the interviewee I would then definitely ask why it's not viable, and if I did not get a satisfactory answer it would for sure affect my impression of the process and, by extension, of the company I'm interviewing for. Not claiming that every interviewee is like this, just that this is the case for some (I am simply not pretending to be an exception on this).
I would hire the guy who notices the CAS in the protocol rather than the one who forks memcached, creating a maintenance burden, and potentially introducing safety or liveness issues (although memcached is somewhat simple, most databases are not, especially distributed ones, bring your TLA+).
And memcached isn't going to pull your changes either, because they want a simple protocol. It's not a CRDT-oriented project.
CAS is the correct approach. incr/decr are ok for counters but that's that. I didn't know anything about memcached - yet an atomic multiply without knowing the original value is an absolutely pointless operation. CAS (as in the CPU's compare-and-set/swap) is a universally useful approach.
Thanks for sharing this. It took me less than an hour to complete and I thought it was pretty fun. These sort of interview challenges that involve rapidly diving into a foreign codebase are really great, but it has to be done correctly. I once frustratingly & embarrassingly failed a live coding interview where I was asked to fix some sort of linked list issue in a fairly large C code base that wouldn't compile. I much prefer a challenge that involves implementing a new feature in a working code base (like this memcached question) as it's closer to reality and will also provide a better experience for the candidate.
- As people pointed out, it's advanced FizzBuzz (really advanced)
- I can easily see a lot of people being unable to solve it, not because they can't do it, but because of a "performance anxiety". The bar here is quite high, and the result is binary (does it work or not).
- I like that it is quite close to real life (people have to read code, figure it out, write code). On the other hand, again, what is not realistic is that you are parachuted into an unknown (quite big) codebase and expected to add a small feature in 3 hours.
The result is not binary - you can have it fully working, or forget the binary protocol, or have the syntax accepted but the multiplication not happening, or have the correct code changes done but the code does not build...
That is one of the advantages of longer questions (compared to 5-minute ones): even if the task is not fully done, there are plenty of other signals.
Given that they're specifically hiring for high-performance database work in C++, I think it's appropriate to have a pretty dang high bar. For less-intense work, I would definitely want to formulate a less intense version of this style of problem.
I got somewhat confused by the question, given part 1. I didn't think we were asked to modify the source, just to write a middle layer. Considering you want to stay in sync with the main memcached codebase for updates and whatnot, my (potentially incorrect) solution was somewhat hacky:
1. Client saves value in local memory. Then issues a command to update the number to a string version of the number (maybe with a flag telling any client to parse it as integer for get). Assuming setting a value is “safe”, this would cause other concurrent clients to error if they are trying to incr (according to example from part 1)
2. Client that did this update now multiplies that local variable.
3. Client updates value to final answer.
If a failure/crash happened you could still sort of retrieve the original value since it is a string version of itself.
Anyway it is hacky and sort of messes up if someone attempts to get the variable while it is a string. So I probably would fail this question.
I went down a very similar train of thought, making the same assumption. I think this makes us “type 1” engineers: wrongly assume that we are not allowed to break the existing code contract, and attempt to use it to implement a new solution “over” the old. Extending without modifying.
I took that as a kind of lesson in and of itself. I’ve certainly had to face code monstrosities and cut through multiple layers to discover the simple rewrite before. If somebody had modified instead of extended sooner, if someone assumed the existing solutions were not so sacred, maybe a monstrosity could have been avoided in the first place.
On the flip side I've also experienced the opposite. Someone modifies existing code from an open source library and then we aren't able to safely upgrade it because of tons of merge conflicts with the modifications that were added.
I think I've been reading so many interview questions lately that I originally thought this was a trick question that required implementing mult with only the provided incr.
I like that this was a more of an engineering question about diving into an unfamiliar codebase than a gotcha math/algorithm question
Type -1 candidates quit on the spot when they discover that memcached has, in the years since they last worked with it, been bastardized to do things other than GET and SET!
The beauty of memcached in the 2000s was that it felt extremely opinionated about features — that is to say, it didn’t have any.
That bool incr argument in the source code feels like a hint from the authors. The arithmetic feature was meant to do two things and two things only, represented by I/D = +/- = true/false. Add (pardon the pun) any more operations and here be dragons. In particular once you add MULT, some clown is going to ask for DIV, and so are you now going to convert ints to floats or force integer division on everyone? Let’s place bets on how long until we get a “bug” report that 12,999,999 DIV 10 should be 13 not 12.
Maybe I’m being unfair. The integer increment thing in memcached seems useful. The useful part of it is the fact that it increments atomically, not that it does the heavy lifting of adding one for you :P If N people increment it you get a number that is N bigger. “Add 1” is also a pretty easy thing to get consistent if you have two people contending for a lock.
If it were a strongly typed language it would have a type like Counter (not Number) and it makes no sense to multiply a counter. The hint is also there in the INCR and DECR commands. They are probably not called ADD and SUBTRACT for a reason.
Feature rejected! Now let’s go code up MULT anyway :)
I'd say MUL(T) is absolutely useless in any practical manner. I'd take some form of CAS if I want atomic operations. MUL has to deal with overflow a lot more than incr/decr. Not knowing the value prior to the MUL, taking any branches is another issue; CAS would solve that case as well.
Incr/decr are useful for a counter implementation (although I see no reason to have a designated decr) - even though counters should not be implemented in such a manner; e.g. each writer should have its own counter, and the reader should sum them up when the value is needed. No contention, scalable.
In that regard I'd consider the interview question "weak", lacking a deeper understanding - not just show us you can code.
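To make the per-writer counter idea above concrete, here's a minimal sketch in plain C with one slot per writer (nothing memcached-specific; padding against false sharing is omitted):

    #include <stdatomic.h>
    #include <stdint.h>

    #define NUM_WRITERS 8

    /* One slot per writer: writers never contend with each other. */
    static _Atomic uint64_t slots[NUM_WRITERS];

    void counter_add(int writer_id, uint64_t delta) {
        atomic_fetch_add(&slots[writer_id], delta);   /* effectively uncontended */
    }

    /* Readers pay the cost: sum every slot to get the current total. */
    uint64_t counter_read(void) {
        uint64_t total = 0;
        for (int i = 0; i < NUM_WRITERS; i++)
            total += atomic_load(&slots[i]);
        return total;
    }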
Oh it’s a great question. Code this solution without having to worry about why. If only the job was actually like that. The why is 10x the how, IRL, once you get beyond IC0.5.
> "They are probably not called ADD and SUBTRACT for a reason."
You'll be pleased to know they aren't called that in the protocol, but they actually are that. They take a delta value so you can do "incr 5" to add 5 and internally the function is named "add_delta" not "incr"[1]. (Beware because it's not quite subtraction, going negative is forbidden. (3 + 5 - 5 == 3) but (3 - 5 + 5 == 5)).
[1] which means the answer in the followup blog post (and my answer) to overload the same function to handle multiplication as well makes the function name even less accurate.
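To illustrate the clamping described above, here's roughly what a telnet session against memcached looks like; the responses are reconstructed from memory, so treat the exact formatting as approximate:

    set n 0 0 1
    3
    STORED
    decr n 5
    0          <- clamped at zero rather than going negative
    incr n 5
    5          <- hence 3 - 5 + 5 == 5, while 3 + 5 - 5 == 3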
This is far from a great engineering interview question, let alone the best.
1. This question is heavily biased towards specific software experience and familiarity with the domain and tooling, while at the same time too shallow in terms of algorithmic and logical depth.
It is like asking how to parse a CSV file in Fortran. The difficult part is not the problem per se, it is picking up an unfamiliar language and tooling on the spot.
For example, someone from a Windows background who isn't familiar with the memcached / Linux environment and tooling / messy open-source codebase might find it difficult to adapt within the time limit. That doesn't automatically make them a bad coder. It is like requiring Emacs and DVORAK keyboards for coding at interviews -- in theory it's all the same, but it's cumbersome and slows you down if you are not familiar.
Challenges like this during interviews can lead to high false negatives. People might get stuck on small side things and not spend enough time on the main thing.
2. This question takes too much time to set up and complete.
Interviewers don't have three hours to wait. Interviewees can probably only do one such problem during an onsite and nothing else. Are you going to make a judgement and decision solely based on the performance of this one challenge? I don't think that's a good idea.
Also, the interviewer needs to explain everything. For a three-hour challenge, that's at least 15 minutes spent doing nothing but setting up the problem.
3. This challenge is actually not that great to identify senior / great coders while mediocre coders with specific experience can pass easily.
The question appears to be highly relevant to the position. I would hire the candidate that completes the feature request over the 10x Programmer that cannot.
> 1. This question is heavily biased towards specific software experience and familiarity with the domain and tooling
Heavily biased towards the specific software experience and tooling _in use by the company who is hiring_. That seems like a win to me. If you are a Windows shop, go find an open–source C# or VB.net project to use instead.
> 2. This question takes too much time to set up and complete.
You might have missed the part where the candidate is given a VM to ssh into, where all the build tools and dependencies are already installed. Also, the author of the blog misremembered; they were given only one hour instead of three for this problem. There were probably other interviews, and maybe lunch, afterwards.
> 3. This challenge is actually not that great to identify senior / great coders
That’s true. As stated in the article, this question was intended to filter out low performers. Other interview questions would be used to judge other aspects of the candidates.
"Chesterton's Fence" is pragmatic for getting things done quickly, but I hope people actually come back later to decide if the fence should stay or go.
Leaving fences without questioning why they exist builds tech debt.
I miss questions like this. Things that are closer to actually measuring "so, can you code?" and not if you can do CS puzzles, or how well you can shoehorn corporate "values" into answers.
I "passed the shit out of this question" and have failed my last several interviews... :( I miss 2010.
Yep, I love this kind of thing and it's waaay more similar to my actual job than the little self-contained "whiteboard"-style problems.
The similar but different style question that I got (at Stripe, fwiw) in my recent job search that I also really loved was: Here's a failing test against a complex codebase; figure out what's wrong and then fix it. This flexes similar muscles I think: figure out enough of the flow of a complex codebase to understand where the problem manifests, and how to modify things to fix it without breaking other stuff. The really nice thing is: if all the tests pass at the end, you've done it. Then, time allowing, you can go work on improving the fix; making it more consistent with the architecture, etc.
My previous employer used something very similar: either adding small features or fixing known bugs of a past build of some open source software in 1-1.5hrs. We prepared one such interview question for each supported language (C++, Python, and Java), and, to help calibration, each question consisted of a series of requirements that required progressively more complex changes.
Both interviewees and interviewers seemed to like these questions. I do hope this approach gets more adoption in the industry. It takes more time to prepare, but the high-quality signals it provides are worth the effort imo. Such questions are also harder to leak compared to typical whiteboard coding questions and thus more reusable too.
So I hacked it out in under a half hour. Basically, I wound up cloning some code paths instead of modifying "add_delta". Not sure if I'd pass or fail... but it is a good question to see if someone is comfortable navigating a new code base.
I really like this sort of question since it allows a candidate to work their own way while showing their process and lets the company see if the applicant can handle the sort of thing that will be a good part of their job, at least for the first year or so. The popularity of white boards and writing binary tree traversal pseudo code is frustrating to me. I haven't done an interview where that's a part of it because the thought of it freaks my brain out a bit. Which seems odd since I regularly perform improv comedy and am considering teaching it in the near future.
I really agree with the article author. The skill of being able to get into a codebase and add an incremental feature like a multiply operation when there’s already an addition is huge. It’s something I failed to do when I was a software engineer. I had to study and understand every part of the codebase before I did anything. I was paralyzed. But that understanding would have come over time if I had just added features incrementally like I was asked to. I didn’t last very long at that position. Luckily I found data engineering and I much prefer that anyway.
Any interview question that takes 3 hours is bad by definition unless of course the 3 hours is a red herring and the interviewees who pass are the ones who do it in much less time.
Although I don't know C, I really like this example as it's pretty much what happens when I start at a new place as a Python dev. E.g. "Hey, X, we need you to change the dates on this screen to show the timezone." - Cue me then having to drill down through the codebase to find and then fix the problem, etc.
Only complaint would be the amount of time it may require to do in an interview.
First thing I would do is download and build. Then find the incr method/interfaces/etc. Then rename all of those to mult, try a rebuild, and see if mult works identically to incr. Then start working on modifying incrementing into multiplying.
Just for the record. Last time I was given to code something up within 3 hours, my task was to write a Minesweeper clone (pretty graphics optional). I ended up with a workable console-mode implementation, using Python.
I would love an interview question like this. For one thing, if I was unfamiliar with the domain, it would spark a conversation that would help me decide if I wanted to work with that team or not.
In this context I'd say: writing a piece of software (programming) vs modifying an existing, large project in a clean and correct way (engineering).
I can write a pretty good standalone tool to solve a problem, but being thrown into an unfamiliar code base and asked to make a significant change that meets all the necessary style/correctness criteria is a whole 'nother level, at least in an hour-long interview.
This is a great test (for bright folks who don't mind "doing free work at interviews"), because it directly answers the "Do you have what it takes?" question.
At my first job interview, they handed me the printed documentation for a programming language they invented (2-3 pages), and a problem. I didn't have any work experience with any technology they used, but managed to solve the test, and that was enough to get my first job, and drop out of education.
I love it--realistic example of working with the production codebase, and a chance for the interviewee to see the actual code quality + practices they'd be dealing with.
For anyone else who was initially confused like I was:
On first look, I thought the solution had to involve exploiting commands over telnet to inject a mult command.
It took me until looking at part 2 to realize it involved actually modifying the source code (which makes sense in retrospect).
If you are looking to actually try and solve it, keep that in mind and don't look at part 2 :)
While it took me under an hour, I still wonder whether my implementation would have passed an interview. I generalized do_add_delta to three operations (incr, decr, mult) and otherwise copied code paths and extended the binary protocol.
For a junior-ish software engineer level (say, 1-3 years out of college) who claims to have experience in C, this seems like a fine question, but for a senior software engineer (which I think is what he was interviewing for) it doesn't seem all that great, except maybe as a first-round filter [1]? Not that it's terrible, and I guess if the candidate fails it then it certainly tells you a lot—but if they complete it successfully then it doesn't really seem to tell you much about their software engineering skills or depth of understanding. It's better than some silly challenges companies give, but there's so much room to ask more probing questions or provide more challenging tasks to help you gauge where the upper limits of a candidate's abilities are, instead of just the lower limits.
Sorry, are there people who do an interview like this and then just treat the result as a boolean pass/fail?
I mostly do pair-programming interviews. When I do it async and have them write something, we'll at least discuss the code together. So either way, I'm way less interested in whether they finished than how they did it and how they think about it.
And maybe I'm just weird, but I'm not super interested in the "upper limits of a candidate's abilities". If they're using all their cleverness just to write something, neither they nor anybody else will be able to maintain it. Software is a team sport, so I want people who are good collaborators. Indeed, the "upper limits" thing makes me think of Aphyr's series of fictional technical interviews. (Start with the last one and work your way forward.) https://aphyr.com/tags/interviews
Why do you think that's a bad question for a senior engineer? I think it's great. This is much closer to the actual work that software engineers have to do!
That's what I do most of the time when I'm working with code, it's always an existing system that you need to add a feature to without breaking something else.
What do you think is a good question for a "senior software engineer"?
Because it doesn't tell you the upper limits of their abilities, only the lower limits. You hardly get any insight into their ability to design a new system, or into how they might deal with problems that require more than 2 seconds of straightforward planning, etc. Again, for a senior engineer, it's not really a bad question, just a very mediocre warm-up/filter question. It's just too easy to copy-paste or modify a few functions and get the task over with, and the result ends up giving you very little information other than "this senior engineer can, in fact, complete a very straightforward task".
Good questions aren't easy to come up with, and I'm not an expert in interviewing by any means, but for a senior engineer you'd want to ask questions that start straightforward like this one and become progressively less straightforward (and maybe include one that's a more subtle design issue) so you can get a better feel for their skills as you work with them.
Okay, I guess I have been in a position to interview senior software engineers who can "design new systems" all day on a whiteboard with artificial "interview" constraints but who could never actually solve this problem. I think this is a better question for senior engineers than junior ones. You seem to be assuming that a senior candidate who could blag their way through a "hard" design question in an hour could ALSO solve this problem and I don't know where you've worked but in my experience that is not at all true. In my experience, senior engineers often can't code much at all because they have worked at places with a ton of people writing code where code just "happens" and they are good at all the other processes that are code adjacent but "I haven't written much code recently". You better test that they can write real code and not leetcode.
I'm a senior engineer and I do a lot of design and architecture and code review every day but I would probably fail most generic leetcode tests and I think I would pass this one, so I feel kind of personally validated by that, so excuse the passionate response. :)
No one performs at the upper limit of their abilities on anything like a regular basis and it's basically a crapshoot whether your question and dialogue is actually able to test their limits. Most people, myself included, have a whole litany of things they need to perform well and doing it on-demand in a fundamentally stressful situation like an interview with a person you don't know is unlikely at best.
There's also a tendency for interviewer to choose questions from their personal expertise rather than something appropriate to the person or role.
It's only one question though. You probably shouldn't have a question that simultaneously evaluates many different things - the different signals would be mixed and harder to interpret. This seems like a good question to add a simple feature to an existing product and also seems very much like the work of a software developer - certainly much more like actual work than "Invert this binary tree" or whatever.
Given a tech document that addresses the risk of change, the rollback plan, the security implications, etc., and a suite of metrics demonstrating load and failure conditions with appropriate alerting, and of course the PR with an appropriate test suite, shown to work in a test environment in concert with the rest of the ecosystem (that stuff is just table stakes): convince a director to approve the change. Ideally a director that doesn't report to the same VP as your org.
That’s, in my opinion, a good thing! It helps verify the engineer can competently code—in a real codebase, which is much more valuable (again IMO) than coding in a vacuum. Eliminating that question, which is often cited as the motivation for these sorts of challenges, is a cue that the candidate is qualified for further interviewing. That being out of the way leaves room to evaluate a lot of other things by:
- enquiring about the candidate’s thoughts about the problem and their approach to solving it, on which (again IMO) this candidate would really shine; you’d likely get a much less thoughtful or thorough account from someone less experienced or talented
- ask about concerns which would be typical in real world work but often get glossed over in interviews, like how they’d approach testing and SCM and review, or even just things they’d do differently outside the interview process
- get more conversational and learn more about the actual value the candidate would potentially bring to your team/org, starting this with some greater confidence, trust and familiarity on both ends
- dive deeper on technical validation if anything in the above feels questionable
Those sorts of conversations convey much more meaningful information than any coding challenge that isn’t very focused on very specific prerequisites for the domain/role.
The ergonomics suck. Whose laptop do you use? I’d want my own. Oh they want me to use sublime? Maybe I prefer Vim with all of my macros. I’m a professional and I rely on my macros and tools. Maybe it’s a PC and I only develop on linux.
It introduces an entire problem space where you’re evaluating something that’s important to neither you nor your company.
Filtering out prima donnas who claim they need the exact right dev environment to write out a function is another feature of this question. They’re never productive developers.
This is ableist whether you realize it or not. People tailor their dev environments to be functional and effective for them not just to cater to their whims, many of us actually need certain accommodations to be able to work and to be able to do so at our level of effectiveness.
I’m quite productive thank you. But if you stand me up in front of a whiteboard or sit me in front of an unfamiliar code editor, I’m gonna produce nothing of value. If you’re filtering me out for that: thank you, I don’t want to work for/with you either
That makes it seem like a great test, right? It detects environments with mutual lack-of-fit. You don’t want to work there. They don’t what you to work there. Outcome from experiment: both realize this.
This is ableist too. You might be relieved to eliminate candidates who could be a great fit with some accommodation. You’re eliminating people unnecessarily, and it harms both you and the candidates.
As the originator of the question addressed elsewhere in these comments, they gave advance notice to the candidates and worked with them to provide an alternative setup as needed.
The accommodation is allowing people to use tools which allow them to be effective. The ableism is finding people “mentally limited” because they may need affordances (often, but not always, physical) to contribute.
Hiring is supposed to be qualifying. Qualified people using completely irrelevant-to-the-task assistive tooling is none of your fucking business. Unless the job you’re hiring for requires coding in Notepad, your expectation that interviewers do is discriminating probably illegally but definitely immorally. And probably also limiting your hiring pool in ways you don’t want.
Not a problem for me in this particular aspect, I’m never going to seek employment from you. But disadvantaging people who need accessibility tools is shitty.
I see that as being largely a non–issue. Either you use your own computer and share your screen with the interviewer, or you use a pre–prepared machine. An ideal interviewer would have three laptops ready to go, one each for Windows, OSX, and Linux, and would ask you which you prefer to use. Or three types of VMs to remote into, or whatever. (The upfront cost of that is miniscule if you’re asking a bunch of candidates this question over a few years.) If the interviewer can’t provide that much then they have failed your interview of them, and if you can’t be productive under those conditions then you will likewise fail. This is beneficial for both parties.
I think that this is actually pretty obvious to most people. Arthur only mentions this to say that getting a working build environment wasn’t part of the test.
Coding challenges in general, if they’re part of the hiring process, need to be much more accommodating yes. Even if they reassure you they’re evaluating “how you think” etc, this kind of thing is an anxiety amplifier for many (including me).
I haven’t interviewed on site with a coding challenge for nearly a decade, but last time I did I literally brought my own laptop and asked permission to use it (which was always declined). If I were interviewing today I’d literally walk out if that request wasn’t at least considered, and probably even still if it was declined without a really good explanation.
You'd be mostly reading code and writing <100 lines of C, much of which would be copypasta. I'm not convinced having the familiar editor would be that big of a deal here.
Reading code (or more correctly, navigating an unfamiliar codebase) is difficult in a setup you're not used to.
I myself rely heavily on IDE click thrus and reference searches.
Purely using grep sucks, and a plain text editor's search function might also not function as smoothly. This all leads to friction.
This type of interview requires the candidate's own dev setup. If they could prepare ahead of time (e.g., give the repo to download and setup the IDE etc), then it would be fine.
This is an issue with many coding problems. I'm not sure if there is a term for this in detection/estimation theory, but it provides what I'm going to call asymmetric information. In this particular case, negatives are true, but all positives must be presumed false. If you make a problem too hard, you're faced with asymmetry in the other direction.
I think the usual solution to that is to follow it up with "what if you wanted to {introduce new/harder requirement}?" and ask them to explain & make as much of the changes as time allows. To give an example, if you were to initially ask someone to implement depth-first traversal recursively, you might follow that up with asking them why you might end up needing to do that iteratively, then you might have them do that iteratively for pre-order depth-first traversal (easier), then follow that up with post-order iterative traversal (harder), then you might ask whether/why it might perform poorly (e.g., re-visited nodes blowing up exponentially), then ask them how they might mitigate it (keeping track of visited nodes), then ask them how they might mitigate the space requirement (lots of possibilities), then ask them how they might handle parallelizing it for some particular problem... I can keep going, but the point is you can keep something like this going until they start to struggle, and that at least tells you the upper limits of their abilities on at least that axis. And you can pace it by e.g. skipping stages as you get a better sense of their skills.
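For reference, here's roughly what the "iterative post-order" step of that escalation might look like in C; the two-stack version below is one common approach, not the only one, and the fixed-size stacks are just to keep the sketch short:

    #include <stdio.h>

    typedef struct node {
        int value;
        struct node *left, *right;
    } node;

    /* Iterative post-order traversal (left, right, root) using two explicit
     * stacks: the second stack ends up holding nodes in reverse post-order. */
    void postorder(node *root) {
        node *s1[64], *s2[64];          /* fixed-size stacks, fine for a sketch */
        int top1 = 0, top2 = 0;
        if (!root) return;
        s1[top1++] = root;
        while (top1 > 0) {
            node *n = s1[--top1];
            s2[top2++] = n;
            if (n->left)  s1[top1++] = n->left;
            if (n->right) s1[top1++] = n->right;
        }
        while (top2 > 0)
            printf("%d\n", s2[--top2]->value);
    }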
I wish I could downvote this a million times. Do you literally have to do that at work and do you actually talk this kind of wine tasting algorithm bullshit in interviews? This has nothing to do with software engineering as far as building software for a business or any kind of meaningful work. Do you even hire people? This has to be a troll account. I enjoy code golf on a casual slack channel, but you can send me every engineer who fails that question in your interviews and I bet I'd find a few good ones.
That's a strange reaction... Are you always able to answer any question that comes up at work? I know I'm not. And so seeing people's reaction to a question that is too hard for them is a valuable signal.
As long as one tells the candidate in advance: "I am going to be asking a mix of questions, some easy, some hard. I don't expect you to be able to answer every question", this sounds like a good question.
I'm grateful for all the engineering interview questions I've gotten in person, and not for any of the ones that were homework. Dude I promised myself as a child I would never do homework after my studies, that promise is sacred, your company is not.
So I never got caught out (EDIT: meaning, to be clear, caught out unable to solve the problem), which you would expect because in fact engineering interviews are actually no use for interviewing engineers, but are great for interviewing for algorithmists, and you'd expect me to do well because I say I'm an algorithmist, and sure enough I did solve all of them. Nobody caught me with my mouth open, I've solved every single interview question without timing out, despite (actually because of) not getting a CS degree from Stanford (which I did attend). No advance prep, although that's bullshit, I spent every second I could on advance prep incidentally by working on algorithms.
But no leetcode. I did look at it once, at one problem, and it sucks, I saw a problem regarding getting the popcount of all the numbers in a range and tons of people got the top 100 points. That's stupid, I knew a superior answer that took morally no time, and would I get more than 100 points for it? Doubtful. There's a very low cap. Not truly elitist. There can't be a max score.
There is one problem that I couldn't get, and that was "how do you find a single element in a list that is not present in an otherwise identical other list?" Like tons of problems, the answer is to sort the lists and take a binary-search analog. That's the answer, but you need faster-than-state-of-the-art sorting. The thing I was supposed to say that they wanted me to say, not to be confused with "the answer", was you had to add the elements and then subtract. I instantly replied, "no, just xor them, less energy." Which was a satisfying response, and marked as a pass. I didn't get the job anyway on a gut check.
Unfortunately I did terribly at the very few jobs at which I did get hired. Because I could only get hired at doomed companies, I had between one and three weeks to prove myself, do or die. Died every time. One place, Unholster, gave me one day to set up my environment and seven work days to prove myself, and after a coworker spent an hour insisting I be fired and the boss saw the stories (epics? the point-system thing), he ended up saying I objectively amounted to negative one-half of a person, and concluded "With your curriculum, nobody would give a níspero for you." Níspero is a plentiful fruit that is sweet and nourishing, but, come to think of it, nobody eats it because nobody values it.
And this is NOT slander, well first off it's true and that's all you need legally, but secondly (morally) that's just what a doomed company is like, I'm actually not being negative. They had to fire like twenty people and had two employees left, and that's public information, tells you everything. Like basically all Chilean companies, they had bad Chilean debt, the Chilean bank leaves a tiny glimmer of hope, that's it, they have to grind through meat until they find the savants. Tons of companies like this, in fact I have a relatively high opinion of them considering. Really liked Unholster. They didn't cheat me out of money or time, so above average. And even when they were for sure getting nothing more out of the relationship, the boss agreed to just talking about why my career was fucked, for like fifty minutes. "I would sign [something for you] saying there's better guys at algorithms than you." "Sign what? Sign what, exactly?"
They were hiring another guy, my replacement, I met him, in the time I worked there and he also thought this would be his chance. And it was, I don't know how it went for him, but the point is it is a chance, only it's very very narrow. They wanted to be cool, too, read Hacker News. Smart guys, really good to talk to in the lunch breaks, I liked all of them. They were just living in a shark tank. And truth be told, I wasn't that good at the job, sure if I got three months like everyone's supposed to it'd be easy, but that's really easy, I told them I'd be much better than that. I wanted to be. I wasn't good at the job. Just at the interviews.
That‘s actually pretty hilarious. I submitted this, made my comment, and then immediately went to make a sandwich. I never noticed the truncated title; I didn’t even look. Thank you!
There is a reason memcached does not have mult. One could argue even incr and decr should be handled by clients. Starting to add more arithmetic support is scope creep. Why stop there, why not add sqr or bitwise ops, maybe a text based adventure game? Just blindly implementing this feature without any questions should be a huge red flag
I don’t completely disagree with you, but every interview question has some artificiality. This interview question cleverly moves that artificiality out of the way, so that it isn’t part of the programming task. Since the point of the question is to test the candidate’s programming ability, that is a great boon.
And it’s not like you can’t chime in after you’ve done the task to say that you don’t think that the multiplication operation is a good fit for memcached’s role. You might even get extra points for that; nobody wants to hire someone who can’t think intelligently about the tasks they are given. Just don’t blow it by refusing to do the programming task on those grounds; sometimes we don’t get the opportunity to choose every task we work on. Sometimes they turn out to be a bad idea, and yet it is still to our benefit to complete those tasks to the best of our ability.
My perspective is not from someone taking the interview. I understand the goal here, but it's artificial enough that it actually degrades the product they are modifying.
It's also quite time consuming for everyone involved in the process. A lot of moving parts, where only identifying how to solve the problem is actually relevant.
I think all the moving parts, as you say, are in getting the build dependencies of memcached set up. As stated in the article, the candidates were given an existing machine to ssh into, where that had all been done already.
As for whether the modification “degrades” memcached or not, that is a great thing to bring up with the interviewer _after_ you have taken the opportunity to show off your programming skills.
You could say the same thing about web apps not needing fizzbuzz, but that wouldn't get you hired.
As an aside, I think atomic incr/decr are useful. I don't see how you would implement them client side without some more complicated mechanism (like CAS).
Absolutely. At Basho, the company learned over time that developers needed help using a distributed key/value store correctly, and started building CRDTs into the server.