Hacker News
The Machine Learning Software Engineering Interview (lyft.com)
238 points by mgrover on Oct 31, 2019 | 74 comments



This blog post was so painful for me to read.

This is a symptom of the "bullshit" going on in big tech companies. "Bullshit" here is an economic term defined in the book "Bullshit Jobs": https://www.amazon.com/Bullshit-Jobs-Theory-David-Graeber/dp...

Reading through the post, I noticed:

So much corporate jargon that really does not mean anything important.

Dehumanizing language when describing the interviewers, the interviewees, and the process itself.

Too much obfuscation of ideas that can be explained very simply.

Glorification of simple problems into heroic challenges.

Delusions of grandeur.

Jobs like these today are tomorrow's layoffs.

I think I will stop here. I have crossed my negativity threshold for the day.


"Obfuscation" and "delusions of grandeur" are practically synonyms for ML and Data "Science" in this industry. I've been around for a while and I've never quite seen something as over-hyped and hyper-glamorized as these two specializations.


Calm down. Machine learning is a part of software engineering, like multiprocessing, computer graphics, or network protocols. It is here to stay. It is part of a palette of algorithms with which one can build software.


Your comment is absolutely correct, but it further points out just how far astray data science has gone from any meaningful work. The issue is that a huge number of "data scientists" have limited programming ability and nearly zero engineering sense.

A perfect example of this is the trend, in most places I've seen, where data scientists strive to increase the complexity of their models (so they can prove how "smart" they are). A huge part of a software engineering education (whether in the classroom or in a dev shop) is learning that complexity is the enemy. No engineer would choose a 3-layer MLP over a simple linear regression for an imperceptible improvement in performance.
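The regression-over-MLP point is easy to make concrete. As a toy sketch of my own (not from the thread): fitting a line with closed-form least squares is a few lines of plain stdlib Python, trivially debuggable and explainable, and on near-linear data it leaves a fancier model essentially nothing to win.

```python
import random

# Toy illustration: fit y = a*x + b with closed-form least squares.
# No framework, no training loop -- just the textbook formulas.
def linear_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope = cov(x, y) / var(x); intercept follows from the means
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]  # near-linear data
a, b = linear_fit(xs, ys)
print(a, b)  # recovers roughly a = 3.0, b = 1.0
```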

The additional irony of all this is that a decade-plus ago, a software engineer with strong quantitative and numeric programming skills was a rare and elite find. You would have thought the data science boom would have dramatically increased the number of these people, but I find them even rarer.


Who are these data scientists? Most statisticians I know would just use the linear regression unless they needed a neural network for marketing purposes. Statisticians spend years studying linear regression and its variations in graduate school. I'd have thought it would be the CS folks who are more fascinated with neural networks.


Well, there are some fairly distinct camps forming in data science. You are correct that those coming from a statistics background generally prefer simpler, more parsimonious models. But there is a not-insignificant group coming into the field via other channels (CS, boot camps, self-teaching, etc.) who view statistics as a bit of a dinosaur and the statistician mindset as backwards. To them, simpler models aren't a good thing, they are a bad thing, and any amount of increased complexity is worth even a small improvement in performance.

I think some of this is exacerbated by the modern pillars of machine learning and data science. Competition sites like Kaggle are entirely based on maximizing test-set accuracy, so winning submissions these days are huge morasses of ensemble methods trained for days and weeks on GPUs, yet in the end they are often only marginally better than fairly basic standard approaches. And when companies like Google build their bots for Go or StarCraft, they use cutting-edge techniques. When people see that and get inspired to get into data science, that's what they want to do, even though the majority of problems are more rooted in data quality, thoughtful understanding of the problem, and more rudimentary methods.

It's also the result of the rhetoric of important figures in the field. Yann LeCun has pushed back strongly on past criticisms of modern machine learning's occasional lack of concern with introspection and model understanding. Judea Pearl, a Turing Award winner for his work in machine learning, devotes large portions of his pop-sci The Book of Why to attacking the field of statistics as a whole, and engages in multiple attacks on historical figures in the field with a ferocity that borders on character assassination. He has even rebuffed modern critics, such as the very widely respected Andrew Gelman, by saying they are "lacking courage" for failing to accept his "revolutionary" causal inference methods over the traditional ones used in statistics.

The attitude is driven a lot by the people and institutions at the top, and as someone in the field, I unfortunately encounter this kind of thinking way too often.


Thanks for sharing your expertise. It was very interesting to hear your perspective.


Yes! It's one tool in the software engineering toolbox! It's a great tool for some problems!

Due to the hype it becomes a goal in some organizations however. "We need to do machine learning because we have big data" or some such. Doesn't matter if the problem could've been solved in 5% of the time and cost with 20 lines of code, thou shalt use machine learning.

It doesn't help that data scientists (creating and training the ML model) and software developers (creating and maintaining the software) usually come from different backgrounds, requiring a "data engineer" as an additional intermediary.

It's always a problem with hype. Blockchain (or Merkle trees) has the same problem but worse, because the problems it solves well are rarer and narrower.


To me, it seems to be larger than one tool. I think of it as a color in a palette with which one can paint software. Octarine.

To put this statement into context, I'm speaking as someone who has been writing code in C since the era of the PC XT. NIPS 2010 was perhaps my rite of passage into ML. There is a screen full of industry-grade C++ and PyTorch in front of me right now...


ML can be useful, but it is getting too much attention. Far more hype than the value it actually provides in many domains, IMHO.

Yes, I know that there are folks that deal with vast amounts of data with inscrutable relationships where you need fancy algorithms to make progress. But seriously, most problems just don't need it, and many folks would be better off with mastering basic statistics and data analysis.

It's fascinating how far you can get with basic stuff. My favorite? Statistics for Experimenters, by George E. Box. It's like a secret weapon! https://www.amazon.com/Statistics-Experimenters-Design-Innov...
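As a flavor of that "basic stuff" (a toy of my own, not taken from Box's book): a two-sample permutation test needs nothing beyond the stdlib and honestly answers the everyday "did the change actually do anything?" question.

```python
import random
from statistics import mean

# Two-sample permutation test: shuffle the pooled data many times and
# see how often a random split produces a difference in means at least
# as extreme as the one we observed.
def perm_test(a, b, n_iter=10_000, seed=42):
    rng = random.Random(seed)
    observed = mean(b) - mean(a)
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[len(a):]) - mean(pooled[:len(a)])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_iter  # two-sided p-value

control = [4.8, 5.2, 5.0, 4.9, 5.1, 5.0]
treated = [5.6, 5.9, 5.7, 5.8, 5.5, 6.0]
p = perm_test(control, treated)
print(p)  # a clearly significant difference: p well under 0.05
```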


Heh, given that I'm starting to see more and more companies offering ML engineers $2-6k/month (before tax), it's starting to resemble the gaming industry in all its negative characteristics instead.


I cannot tell from your comment whether $2-6k/month before tax should be considered a lot or a little. In the major tech centers, $2-6k/month is quite low for anyone with significant experience (>5 yrs). Do you disagree?


They used the word negative.


Since he compared it to the gaming industry, I think he's saying it's low.


Does "blockchain" get an honourable mention?


Definitely.


Really?

Were you around during the dotcom era?

Although I'm not old enough, I've heard that OR in the 80s was the same crap.


Nobody talks about operations research today, but techniques that fell under that umbrella, like ARIMA and linear programming, are still widely used and aren't going anywhere. (And it's not without some irony that automated bulk time-series forecasting is now sold as AI.)


It's funny but at my last company, one of our systems used some linear programming to generate a model of physical processes.

The problem could have been tackled with greater accuracy using machine learning, but it would have taken a long time for the system to generate enough data points for a sound model and would have required more storage space. This was also complicated by the fact that the model had to be regenerated whenever the physical system being modeled was changed.

The linear programming solution was a lot cheaper and was "close enough" to serve as a useful approximation.


Linear and quadratic programming are amazing and totally underappreciated. Often they are the fastest way to get useful answers for problems (the solvers got really good over the past few decades).


What's OR?


> What's OR?

https://en.wikipedia.org/wiki/Operations_research

Basically a mathematical approach to problems of logistics and scheduling developed first in WW2. Very powerful in the domains for which it was developed but less generally applicable than enthusiasts hoped, leading to the usual “hype cycle”.

If you have a problem OR could solve, or just want to fool around with it, PuLP is very easy to use: https://pythonhosted.org/PuLP/. Of course, the ease of use means that it is a commodity skill now.
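For a sense of what such solvers do for you (a hedged toy sketch, not PuLP's actual API): a linear program's optimum sits at a vertex of the feasible region, so for a tiny two-variable problem you can enumerate the vertices yourself with nothing but the stdlib. Real solvers handle thousands of variables far more cleverly.

```python
from itertools import combinations

# Toy 2-variable LP:
#   maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x >= 0,  y >= 0.
# Constraints written as a*x + b*y <= c:
cons = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    # Solve the 2x2 system where both constraints hold with equality.
    a1, b1, k1 = c1
    a2, b2, k2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel constraints, no unique vertex
    return ((k1 * b2 - k2 * b1) / det, (a1 * k2 - a2 * k1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

# Candidate vertices: feasible intersections of constraint pairs.
vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # (2.0, 2.0), objective value 10
```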


There is also Google OR tools.

https://developers.google.com/optimization


Yep. A taxi routing service, and not even the best one; you'd think they're launching those taxis to Mars.

That said, SpaceX's interview process is even more ridiculous. The first step is to talk on the phone with a non-engineer recruiter who has to ask you highly technical questions but doesn't understand a word of your response, and you know it. They then sort of have to correlate what you're saying with the answers they have and decide whether you know anything or not. It's the most uncomfortable interview situation I've ever been in. Or at least that's how it was a few years ago; maybe they've changed it. I was so thrown off by this that I totally fucked it up and never got to the second step, in spite of nominally having all the right experience. To relate: imagine trying to explain low-level assembly to a five-year-old, over the phone.


Not saying this is the case for spacex, but in my field (totally not space or engineering or software (but very much "tech" (physics/chemistry)) related), these types of interview are for weeding out the non-standard folks (of which there are many, including me but many of us (including me) have become good at hiding it). A person with high E would presumably (but not always because it is indeed a difficult task, casualties are regrettable but expected) "grok" the task and begin feeding the right keywords to the recruiter. Once you realize the game, it becomes fairly easy. Just read the job description and sprinkle the keywords provided therein.


Thank you so very much for saying this. When will these people realise that nobody gives a toss about their overly long and overcomplicated selection process?

And, these guys aren't even Waymo.



mission critical!


It seemed like a lot of words to say very little.


> This is a symptom of the "bullshit" going on in big tech companies. "Bullshit" here is an economic term defined in the book "Bullshit Jobs".

Bullshit is neither an economic term nor an anthropological one. David Graeber is an anthropologist, not an economist, though he has written inexplicably popular books on economic topics that betray his lack of understanding of economics.

Bullshit is actually used as a technical term in philosophy occasionally.

http://www2.csudh.edu/ccauthen/576f12/frankfurt__harry_-_on_...

> One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted. Most people are rather confident of their ability to recognize bullshit and to avoid being taken in by it. So the phenomenon has not aroused much deliberate concern, or attracted much sustained inquiry. In consequence, we have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves. And we lack a conscientiously developed appreciation of what it means to us. In other words, we have no theory. I propose to begin the development of a theoretical understanding of bullshit, mainly by providing some tentative and exploratory philosophical analysis. I shall not consider the rhetorical uses and misuses of bullshit. My aim is simply to give a rough account of what bullshit is and how it differs from what it is not, or (putting it somewhat differently) to articulate, more or less sketchily, the structure of its concept.


> not an economist, though he has written inexplicably popular books on economic topics that betray his lack of understanding of economics.

"Debt" I think shows a deep understanding of the relationships economics has with history, philosophy, and society. Graeber knows he's not an economist but he's got a point to make and he's not shy about making it even though it says less than flattering things about some aspects of economics.

Your link is broken for me, btw.


What point is Graeber trying to make in Debt? It seems to be “capitalism bad” but that may be too kind to the book’s coherence.

On Bullshit

https://en.wikipedia.org/wiki/On_Bullshit

https://noahpinionblog.blogspot.com/2014/11/book-review-debt...

> Now, this may sound a little silly - if someone wrote a book called "Metal: The First 5,000 Years," and then filled that book with stories of war and bloodshed, never failing to remind us after each anecdote that metal was involved in some way, we might be left scratching our heads as to why the author was so fixated on metal instead of on war itself. And in fact, that is indeed how I felt for much of the time I was reading Graeber's book. The problem was exacerbated by the fact that Graeber continually talks around the idea of debt in other ways, mentioning debt crises (without reflecting deeply on why these happen), the periodic use and disuse of coinage (which apparently is just as bad as debt in terms of enabling the capitalism monster), and any other phenomenon related to debt, without weaving these observations into a coherent whole.

> In other words, I am now angry at myself for paraphrasing the book, and trying to put theses into Graeber's mouth, because this is such a rambling, confused, scattershot book that I am doing you a disservice by making it seem more coherent than it really is.

> The problem of extreme disorganization is dramatically worsened by the way that Graeber skips merrily back and forth from things he appears to know quite a lot about to things he obviously knows nothing about. One sentence he'll be talking about blood debts and "human economies" in African tribes (cool!), and the next he'll be telling us that Apple Computer was started by dropouts from IBM (false!). There are a number of glaring instances of this. The worst is not when Graeber delivers incorrect facts (who cares where Apple's founders had worked?), it's when he uncritically and blithely makes assertions that one could only accept if one has extremely strong leftist mood affiliation


> It seems to be “capitalism bad” but that may be too kind to the book’s coherence.

Have you read the book? The book is an exploration, and an interrogation, with so much to learn from that saying that about it seems pretty philistine.

Maybe you were just summing up the review you linked from Noah Smith. I read most of it; it's a bit meh, and Noah doesn't really seem to be trying very hard in it. This, though: "leftist mood affiliation". That's cheap 'preaching to the choir' language.

If you have a link to a more serious review I'd genuinely like to read it.


Shit, I remember most of the "bad stuff" in Debt predating capitalism by somewhere between centuries and millennia. Seems like a weird way to write it if its Secret Purpose was to be a long-winded hit piece on capitalism.


Sometimes, lessons from the past can help to remove some of the rose-tinted glasses that people seem to associate these newer companies with. For example, it's worth reading Enron's Vision and Values statement (http://www.agsm.edu.au/bobm/teaching/BE/Cases_pdf/enron-code...) from 2000.

I don't think there have been any fundamental changes since 2000 that would incentivize communication from large public corporate entities to be more honest or logically rigorous.


Good source of data for training a de-bullshitter.

Input: A year and a half ago when we began scouting for this type of machine learning-savvy engineer —something we now call the machine learning Software Engineer (ML SWE) — it wasn’t something we knew much about. We looked at other companies’ equivalent roles but they weren’t exactly contextualized to Lyft’s business setting. This need motivated an entirely new role that we set up and started hiring for.

Target: We invented a position called a machine learning software engineer.

Input: First, candidates on the ML SWE loop go through Lyft’s hiring review. The review is a regularly scheduled session for a committee to study candidates with an unbiased perspective and decide whether to hire them. Working alongside the review committee is a separate panel of interviewers that provides technical feedback. This feedback is designed to help the committee decide if there’s a fit and, if so, the candidate’s technical level. At first glance, this review process may seem cumbersome. Examining the checks and balances more carefully, however, we notice that they are intentionally introduced to put friction on the hiring process. Having a consistent review committee unifies standards and eliminates bias.

Target: A committee and a group of interviewers evaluate a candidate for fit and technical level.

Input: Despite the what-ifs, being transparent about how we design interviews can improve our interviews. Call it enlightened self-interest: candidates invest time to talk to us and we mutually benefit from learning if there is a good fit. Even if there isn’t an immediate fit, positive experiences build brand and improves candidate sourcing. Maybe the candidate can reapply when the timing is better. Practically, hiring an engineer easily costs tens of thousands of dollars. By showing how we iterate on our interviews, we reveal what we truly care about and how we try to probe at them, hopefully adding to the virtuous cycle for the hiring pipeline.

Target: We don't respect the time of our readers and are hopelessly unenlightened on this fact.


Left unsaid is the "everyone gets a veto" crap that destroys hiring 10x contributors. I have personally been involved in numerous interviews (as the interviewer) where my fellow interviewers deliberately shitcanned candidates because they seemed extremely well qualified and highly motivated (compared to my colleague). "Everyone gets a veto" is nearly universal in my field, and it is a massive problem.


Input: Lyft’s high-level problems that stem from its business context

Target: We need to route taxi cabs.


I have another target for your second one: We share a Google Drive full of resumes and people put their names on a spreadsheet when they like the candidate. Then we interview them!

It’s super innovative and fit to our business context!


Yeah, actually, it would not surprise me if their article was generated by GPT-2 using your "targets" as input.



This just reeks of mostly inexperience with a touch of arrogance.

> We looked at other companies’ equivalent roles but they weren’t exactly contextualized to Lyft’s business setting.

Ok, so what does Lyft need then?

> What are Lyft’s challenges (and can a specific role help)?
> What should the role be with respect to the organization’s goals?
> What are the desired skills, knowledge, and talents given the expectations for the role?

Umm. Isn't that what every company hiring looks to address?

> What’s left is to define the necessary ingredients for what a successful hire looks like vis-a-vis (3) in Lyft’s context:
>
> Skills acquired through practice,
> Knowledge learned through study and personal experience, and
> Talents that make each candidate unique.

Ok, I'll stop reading now since "Lyft's context" is no different from any other company let alone one looking to hire a Machine Learning Engineer/Scientist.


Summary: we do not really want you to work for us; we are too busy trying to understand WTF we really want. The amount of BS in the article is staggering, IMO.


I thought it was because I was reading it right as I woke up, but I've re-read it now, and it's almost dizzying how it cloaks the details while being written as if it's open and clear.


When I was trying to decipher it, I thought it was written by somebody in a state of delirium.


> In the context of the modeling onsite, we ask open-ended problems with sufficient business and problem context such that the candidate can clearly identify an ML-based approach to solve it.

I'm disappointed the author wasn't more specific about where the line is drawn between "ML SWE" and "Research Scientist"/"Data Scientist" when it comes to the core ML competencies like model selection, evaluation, and design.

Having worked on teams with Data Scientists and "ML Engineers", it's been murky whether I and others on my team who were not the former were Data Engineers, Backend Software Engineers, "ML SWEs", or "Software Engineers - Machine Learning".

This area of software is rapidly developing and there's no semblance of the bright line that tends to exist between Backend Engineer and Frontend Engineer for ML-involved engineers.

I'm personally interested in moving into the "Software Engineer - ML" space (or is it "ML SWE"?) and thus I have to find out what the right balance between Software Eng skills and researcher skills is. I think I have a decent sense of real-world ML basics but would quickly flounder when pressed for details on mathematical technicalities of different modelling approaches.

Lyft's job listings have these items for "Software Engineer - Machine Learning":

> 5+ years (or Ph.D. with 2+ years) of industry or research experience developing ML models

This really seems like the purview of a research scientist or a data scientist, if I understand the meaning of "developing" correctly.

> Proven ability to quickly and effectively turn research ML papers into working code

This seems fair enough, but my experience has been that the data scientists are doing this mostly, while the "Software Engineer - ML" people are preparing data pipelines or batch training systems.

> Deep knowledge of ML libraries like scikit-learn, Tensorflow, PyTorch, Keras, MXNet, etc

I know it's a job ad and thus it is a 'wish list', but this seems unreasonable.

Guess I'll just keep studying 'all the things'. Brb in 5 years.


Undergrad here. This is kind of disappointing.

I really want to work in this field, but it seems I must have a PhD at minimum, and I'm not sure I want to take that step yet.


You don't, though an MS may be borderline necessary. Without a PhD you will likely only lose out on some jobs at companies doing hardcore ML research, or companies more invested in the DS hype than in actually solving problems with DS. There are plenty of companies out there with data science needs that can be filled by non-PhD data scientists.


I think you underestimate the level of software architecture and engineering skill that people with formal training in graduate level statistics bring to these jobs.

I manage a team of machine learning engineers in a mid-size ecommerce company and I can tell you that the same person who is optimizing Dockerfiles for better layer reuse & figuring out how our CI pipeline will safely get secrets needed to retrieve model files for integration tests is the same person researching new variations of triplet loss in a paper from arxiv they present in our team’s journal club & developing Bayesian hierarchical regression models and explaining why partial pooling using an industry-specialized prior distribution actually produces coefficients in the model fit that have a meaningful improvement over OLS for some business outcome.

The same people that are refactoring a collection of unit tests to be parallelizable and save us 45 seconds on every test run are also running huge hyperparameter tuning experiments to get a learning rate scheduler that allows us to reduce training time for an in-house deep GRU neural network from 24 hours to 15 hours, and they are defining infrastructure as code tooling to even create the very GPU environments where this training is taking place.

The skill set really is extremely different from general backend engineer with a proclivity for hacking around ML models and also is really different from “research developer” who rarely deals with end to end systems or necessary concerns of production or quality code factoring, and also is super different from data science which is effectively just ad hoc business analytics but with more impressive pedigree on the resume.

I’d define a machine learning engineer as taking someone with several years of graduate experience in statistics / machine learning inclusive of formal probability theory, analysis, topology, Bayesian stats (or work + research experience that is equivalent, though it’s super rare for this to be able to replace formal graduate math training), and then adding senior level skill set in high-performance computing, generalist backend engineering, system architecture, and full management of the lifecycle of complex production systems.

The only piece of common modern engineering that I would say ML engineers typically don’t have a senior-ish level of command over is frontend development (though some do out of just hobbyist interest).


I don't think I'm underestimating "people with formal training in graduate level statistics", but quite possibly the particular people you think of when you say the quoted phrase.

I've worked with stats/comsci/bio-informatic/ML PhDs at multiple multi-billion-dollar software companies now and it's certainly not true that even the majority of people with graduate stats training are excellent software engineers. Would love to work at a company where that's true, I just don't think that's the common case at all.

> The skill set really is extremely different from general backend engineer with a proclivity for hacking around ML models and also is really different from “research developer” who rarely deals with end to end systems or necessary concerns of production or quality code factoring, and also is super different from data science which is effectively just ad hoc business analytics but with more impressive pedigree on the resume.

Totally agree. The terms are very messy though. The "ML Engineers" with PhDs in ML are still called "Data Scientists" at Zendesk, for example. At other companies "Data Scientist" just means 'Data Analyst that knows SQL'.

> I’d define a machine learning engineer...

Would be happy with this definition. It's much clearer than what you can gather from Lyft's post. It's close to the definition I currently hold to. On this definition I've enrolled in graduate Maths+Stats as I'm still almost entirely a Data Engineer / Backend Engineer in terms of skillset.


Your definition of ML engineer comes off as describing a high-risk individual to have in an organization. I would rather split that into two separate, orthogonal roles and have redundancy in my resource pool. It seems like a very difficult and scarce resource to hire. How can Average Corp even think of hiring someone like that? Dead no from me.


The problem is you can’t really split them into two. You need the same person designing models to also be making tactical decisions about software design and production lifecycle because those things are almost always highly dependent on nuanced specifics of the model itself.

There is not a division of labor where one person makes the model and then throws it over the fence to a team that manages its production usage and lifecycle. That team over the fence would not be able to do it, and I’ve seen this attempted org structure fail hard everywhere its been tried for ML services.

What’s telling is that you bring up the personnel risk but you don’t consider the value add. When Average Corp hires someone, it’s because that person brings more value than they cost, period. It’s not because the person is “not risky” in some vacuum of decision making like you’re painting it out to be.

> Dead no from me.

That’s fine and all, but it usually indicates tech death of a company, and likely you have brain drain in more areas than just machine learning. If you aren’t willing to take the risk and do what’s needed to structure the work and job to extract value from high performers, that’s a sign of corporate mediocrity, and I think anyone with the skill set to be an ML engineer like this would already not even be applying to work in a place like that.

Honestly, the risks are even worse than your comment says. There are also big risks around keeping this person intellectually engaged and giving them valuable job experience.

You pretty much have to pay them a lot, give them good work life balance, give them budget for conference travel & continued learning, give them meaningful upward career & compensation growth, and give them meaningful projects.

I look at this and think, yes, if the business can’t give all those things and still be coming out ahead on the person’s productivity, then you don’t want an ML engineer.

But more often the company needs an ML engineer and absolutely would gain more from their productivity than they lose on supporting all those job quality aspects — yet managers just take superficial offense at these demands and balk at the idea that you have to provide meaningful projects and career growth instead of just barking orders and expecting them to put up with work that does not help them grow.


> Most companies are open about the expectations for the role being interviewed for, the interview process, and preparation tips

When did this happen? I haven't interviewed for a few years, but nothing was open or clear then.


“Most” is certainly wrong. I’d allow “some”.


I think this airy, self-aggrandizing post full of BS will be very offputting to data scientists and MLE's. Very low information density here.


that sounds perfect for data scientists and machine learning 'engineers'


I'm confused by the animosity towards ML SWEs and data scientists in this thread. Yes, there's a lot of hype/BS surrounding the field, but there are also some highly technical people making very novel discoveries.

My question is: why the hate? Isn't a machine learning engineer just as valid as any other engineer?


It would be great if companies would allow you to bypass the code challenge if you grant them read git access to a relevant project that you have ownership of.


This is a common sentiment among applicants, but after many years of hiring people, just FYI, looking at people’s github projects is one of the lowest quality signals I get from an applicant. It takes me forever to wade through someone’s code to figure out what they did or how good they are, and when there are multiple people on the project, it’s not easy to see who’s driving or what the cooperation/conflict situation is by looking at the git history. It can also backfire in a variety of ways you might not want and not know about, so keep this in mind. Your personal project code sometimes has lots of decisions in it that don’t look particularly good to a very experienced engineer. I can’t even count how many people have sent me github links for code they’re proud of that when I study it makes me second-guess the candidate a little bit. As an interviewer, I honestly prefer to hear the candidate talk through their code so I can find out they’re learning and hopeful rather than read code that says more about today’s limitations than tomorrow’s potential.


I had to read through your comment twice to get it. My experience leads me to agree with your idea of letting people talk through their own code. That category of conversations has usually been very rich and telling, much better than one over imaginary binary tree rotation algorithms or a "design a url shortener"-ish question. My favorite part is where a skillful candidate can lead the interviewer to interesting and real problems and solutions, demonstrate ability and both get to enjoy the conversation.


On a whim I responded to a recruiter last week about a job. The first step was an online coding challenge. The company wanted a "Java developer", and I told the recruiter I have never used Java regularly, but given my experience she said it wasn't a problem. I figured the code test would be some simple filter stuff.

Instead, it was five algorithm quizzes... all in Java. I had an hour to finish it and, after spending the first fifteen minutes searching for Java standard library stuff just so I could write the code I wanted, I decided this was a waste of my time and closed the tab. This was all before I could even speak with someone who works at the company to find out if I cared to move forward.

I worked as a hiring manager for a couple of years and have had input into the hiring process for more than a decade. I've never found it difficult to screen someone out who had no place applying to begin with by taking a fifteen minute call. This at least gives them a chance to ask some questions and they don't feel like you're yanking them around.


Problem is it's easy to cheat about what's yours


As if regurgitating someone else's leetcode answers is any better? Or googling or stackoverflowing answers? Or a take home challenge?

Even at the ML level, engineering is more about applying known good techniques and less about new innovative ideas.


Companies think regurgitating leetcode is better... the number of problems that can be asked is quite large. Memorizing that search space in itself takes tenacity and a good memory... both of which are signs of a good employee (regardless of their current coding skill set)


I think the trick here is have people talk through what it does, how and why (as in what are the tradeoffs, what other approaches could have been taken, etc).

You can't fake this. Or to put it better, even if the code isn't yours and you can do this well, it doesn't even matter that the code isn't yours as in doing this you by definition have the skills and knowledge to reimplement it anyway.


You can absolutely fake this. There’s a huge difference between coming up with and implementing something vs understanding it enough to be convincing.

Have you actually tested this out?


Yes I've done this both as an interviewer and interviewee, it's a pretty valuable exercise I've found. Not a silver bullet, not the only thing you can do, not appropriate for every situation, but generally good.

Regarding your other point I agree with you, but if the main point is to evaluate understanding then the difference doesn't really matter. If you're testing for ability to invent concepts etc, then this might be more important to you.


Okay, but at this point it sounds pretty much the same as a traditional interview where the candidate discusses their prior experience.


Not much harder than with a take-home challenge. Either’d be tough to do convincingly.


what an absolute mess. this is the private-sector equivalent of the same inefficient madness i see with union jobs in the public sector.


To save some people the read, it's the same leetcode bs performance as anywhere else.


Enjoyed the post but I found this sentence interesting.

>As a candidate, it’s easy to get a foot in the door and be evaluated by an interviewer.

Do people really feel that way about a ML SWE position at Lyft? I would be interested to see what percent of applications they receive make it to the door.


The article does hint at the percent: “YMMV: healthy numbers for an interview loop for the phone screen to onsite, to offer, and to offer acceptance are, for example, 50%, 25%, and 70%.”
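Taking those example rates at face value, the end-to-end yield is easy to work out. A quick sketch (the per-stage rates are the article's illustrative "healthy" figures, not Lyft's actual data):

```python
# Interview funnel from the quoted example numbers:
# 50% of phone screens advance to onsite, 25% of onsites
# result in an offer, and 70% of offers are accepted.
phone_to_onsite = 0.50
onsite_to_offer = 0.25
offer_accepted = 0.70

# Multiply the stage conversion rates to get the end-to-end yield.
end_to_end = phone_to_onsite * onsite_to_offer * offer_accepted
print(f"{end_to_end:.1%} of phone screens become hires")  # 8.8%

# Roughly how many phone screens per hire at these rates:
print(round(1 / end_to_end))  # 11
```

So under those example numbers, fewer than one in ten candidates who reach a phone screen ends up accepting an offer, which puts "easy to get a foot in the door" in perspective.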


Wow, this post is really up its own butt. Talk about self-important.



