A major problem with data in the job market is that there are bad actors on both sides, employer and employee, with incentives to lie about themselves; i.e., companies with toxic cultures won't reveal that, and neither will malicious or incompetent job candidates. Many readers of HN know that companies can be bad, but don't recognize that, say, competent software engineers are not the median candidate applying for most roles. The other side has a filtering problem, too.
A second major problem with data is that it's often hard to know why something didn't work out. Yeah, not everybody's a good fit for a role. But where would you get an objective source of truth about an individual's performance? Or ... transpose it to dating: would you trust a group of ex-girlfriends or ex-boyfriends to give an accurate assessment of someone as a partner? Might they have incentives to distort the record? Where does good data come from?
So it's not just that AI lacks the data. It's that there are structural problems with ever gathering the data. It's not like the data's out there and we just don't have access. If it even exists, it's poisoned at the source.
The best hires I've seen in my life have come through word-of-mouth referrals. No data, no recruiters, no leetcode, no HR screenings. Good people know other good people. They trust one another, and they are motivated to discuss opportunities of mutual benefit.
Of course that doesn't scale when you're trying to fill a large team quickly, but at that point you have to realize that "average" is about the best you can hope to get.
> The best hires I've seen in my life have come through word-of-mouth referrals. No data, no recruiters, no leetcode, no HR screenings.
Nepotism. It gets a bad rap, but the reason this is effective is that you're essentially comparing hiring via a piece of paper vs. hiring via intimate knowledge of a candidate's actual knowledge and work habits. The latter clearly gives you more information than you could get from any screening process. Of course this doesn't mean you should hire someone just because they're a relative or friend, because that's not actually using the information gain. But we should recognize why the process is successful. If we want to build a fair system that doesn't rely on nepotism, then we also have to recognize why nepotism works in the first place. I don't see these complicated hiring processes doing this.
Of course, you can also just use a noisy process. But if you're going to use a noisy process maybe don't make it so expensive (for company and candidate; money and time). If this is the method, then the process should be about minimizing the complexity and costs of hiring.
Some of the worst hires come through word-of-mouth referrals, too. That's how you get unqualified nepotism hires. In fact, networking can be thought of as a form of nepotism or cronyism (depending on how touchy a given person is about "nepotism" being applied to the manner in which they got their position). I know I'm likely alone in viewing such hiring practices - feelings over process - as a signal of an impending collapse. I think it's a sound heuristic, though, at least for previously-established companies. If you're not connected to eminently qualified potential hires (and you're probably overestimating the quality of your network), "hire my friends" is a sure way to end up with an increasingly incompetent workforce.
I've gone back and forth on this and now am firmly in the feelings > process camp if you have competent folks making the hiring decisions and you train everyone well in avoiding unconscious bias.
The reality is that strong soft skills are crucial to hiring the best talent, and those skills cannot be "objectively" tested. Even software engineering, to a large degree, requires a lot of design-related traits that demand subjectivity in assessment.
In that case, wouldn't that select for generally likable people? Idk sounds like evolution to me.
Of course, though, that's going to encode some existing biases in the data, but we could keep adding them to protected classes if society thinks they're worthy of inclusion.
That last part is pretty flawed today and changes slowly, I admit, but overall this system sure as heck solves more problems than I could come up with. At least, I think it converges over time.
The hiring process is basically a stochastic process with time wasting theatrical performances on both sides.
I honestly believe at some stage there should just be a random number generator making the selection. Especially if the final selection is between a couple of highly qualified candidates.
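A toy Monte Carlo makes the point (every number here is invented): give a few closely matched finalists noisy interview scores, and the top scorer is barely more likely to be the genuinely best candidate than a random pick.

```python
import random

random.seed(42)

TRIALS = 100_000
hits = 0  # times the top-scoring finalist is actually the best one

for _ in range(TRIALS):
    # Three closely matched finalists: true skill differs only slightly.
    true_skill = [random.gauss(0.0, 0.1) for _ in range(3)]
    # Interview score = true skill + substantial measurement noise.
    score = [s + random.gauss(0.0, 1.0) for s in true_skill]
    best_true = max(range(3), key=lambda i: true_skill[i])
    best_score = max(range(3), key=lambda i: score[i])
    hits += best_true == best_score

print(f"Top scorer is truly the best {hits / TRIALS:.0%} of the time")
# Hovers near 1/3 -- i.e., barely better than the random number
# generator proposed above.
```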
I am a big fan of both word-of-mouth referrals and strict hiring processes, read Amazon (without leetcode, I am no dev). If you know someone good, get them the first invite for the phone screening or even an on-site interview. But then recuse yourself from the hiring process: don't interview, don't participate in the decision making. That way, you are not the only one influencing the hiring; you just made it easier for someone to get a first foot in the door.
Of course this isn't perfect, people might be biased to please higher-ups or colleagues by preferring those referral candidates. Or you might end up indirectly influencing the hiring decision, intentionally or not. But it's still better than some AI-based CV screening or pure nepotism.
Speaking of nepotism: on LinkedIn via my network, I saw a senior manager in a Bay Area company asking publicly for internships for his university age daughter.
I shook my head in amazement, then felt absolutely mortified on behalf of that poor woman.
At best, she is living in her father’s shadow. At worst, she is being denied the valuable life lesson of being allowed to fail and get back up.
At best she gets a great internship and then a great job. At worst she didn't want him to do that and won't accept a position she could have gained that way.
Without suckling on your mother's breast you would be dead. Without that 2nd grade teacher you would be dumb. Without the near-infinite external inputs that have gone into each person, they would not be the person they are, in the place they are. All that is before we get on to the role of plain old luck.
None of your achievements are entirely your own; we are all standing on the shoulders of giants.
Being born in America is winning the lottery in many ways. Should Americans pursue jobs in third-world countries to make sure they succeed on their own merits?
Shoot, you'd really need to pick the worst job in the worst country as your starting point to know you really had what it takes.
Honest question, how is this meaningfully different from a recruiter?
Sounds like a recruiter with intimate knowledge of the candidate and a strong network. A biased recruiter, but aren't they all? I mean, a recruiter doesn't get paid unless they get someone a job (and they get more money for a higher-paying hire).
> No recruiter I have ever worked with has done something for me because I was related to them.
This is missing the point of my question. I'm assuming that the daughter has more to her merit than just her father's connections. If she is getting a job she isn't qualified for, different story, and I think everyone agrees with your point. If she is qualified and the dad is simply helping extend her network (like a recruiter) then what is the difference?
If you only look at the surface then you've missed everything important.
> Come on man, are we really that cynical towards meritocracy that we are accepting nepotism as normal?
1) Meritocracy doesn't exist and can't exist. I have several arguments laying this out. It's worth mentioning that the article is also arguing this. The problem is that the system is noisy. Even with perfect resumes and perfect hiring managers, merit is so convoluted that evaluation is near impossible. How do you evaluate a software engineer perfectly? Do lines of code matter? The speed at which they work? Ability to get along with the team? Does pedigree matter? Lines of code from others that utilize their lines of code? Ditto but weighted by the net profit corresponding to said lines? Can that even be measured?
2) How do you even measure a potential hire where you don't have any of the above metrics? Does pedigree strongly correlate with productivity? (Most people argue no; I argue correlated yes, causal no.) Does it correlate strongly with LeetCode/whiteboard problems? If yes, then explain the cheater threads that are so common. So what metrics should be used to evaluate? Again, the article argues (in depth) that the process is extremely noisy, nuanced, and complicated.
3) Which nepotism are we discussing? We'll say that a candidate has 2 attributes: merit and nepo. If merit dominates, then nepo is a means to reduce noise, since you get information from a trusted source that cannot be gathered through resumes or interviewing (this is not different from calling a previous employer, except that there's higher risk in contacting previous employers, which is why this doesn't happen, and previous employers are not a trusted source). On the other hand, if nepo dominates, then no one disagrees with you, and we've all been explicitly clear about this. Nepotism is only useful as a means of filtering through already (statistically) equally qualified candidates.
> are accepting nepotism as normal?
Accepting that it is normal doesn't mean you have to like it, it just means that you are recognizing that this is fairly status quo. Should we put in efforts to reduce nepotism and increase our ability to judge a candidate (and current employee) based on their merits? Hell yeah. But it is a fool who believes we have such tools today. And an even bigger fool who perpetuates the system of noisy evaluations. I'd personally love to have a system that is purely meritocratic. But the truth is that Goodhart always wins, and if we don't recognize such flaws we are only creating a worse system that is masquerading as meritocracy. Few people even ask themselves why Goodhart always wins, and not asking also perpetuates this tomfoolery. I've even answered why Goodhart wins, and let's be honest, how many people caught it?
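Since I keep invoking Goodhart, here is the mechanism in a few lines of Python (synthetic data; the "gaming weight" is an assumed parameter, not a measured one): once a score rewards something gameable, selecting hard on the score selects for the gaming instead of the skill.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

skill = rng.normal(size=N)  # the merit we actually want
prep = rng.normal(size=N)   # interview-gaming ability, independent of merit

for weight in (0.2, 1.0, 3.0):  # how strongly the metric rewards gaming
    score = skill + weight * prep
    hired = score > np.quantile(score, 0.99)  # take the top 1% by score
    print(f"gaming weight {weight}: "
          f"hired mean skill {skill[hired].mean():.2f}, "
          f"mean prep {prep[hired].mean():.2f}")
# As the metric becomes more gameable, the top of the ranking is
# increasingly selected for prep rather than skill: once the measure
# becomes the target, it stops measuring what you wanted.
```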
Arbitrary opinions that cannot be decided by mere bias in data aggregated by an AI of any sample size or source. This is the fundamental flaw and why it's all horseshit.
You know, artists covered all this many years ago.
"competent software engineers are not the median candidate applying for most roles"
Most developers with a few years of experience are pretty competent. Most employers are not toxic waste dumps.
Give the worst interview candidates a try and they will do well. Give the best interview candidates a try and they will fail or leave early. Interviewing is a skill; anyone who is extremely good at it most likely practiced more often, and that could mean they needed to. The more experienced the candidate and the poorer the interview they give, the more of an unearthed gem you've discovered.
I just interviewed someone who I am certain lied on their resume; they could barely write valid code during their interview, despite supposedly being a senior engineer with nearly 10 years of experience.
Randomly hiring the first people to apply (or "giving them a shot") is a terrible strategy unless you have tons of money to burn and no deadlines to meet.
We have internship positions for people willing to learn, but when we are trying to fill a senior position, we expect people to be at a certain level.
It's amazing how fast coding skills can turn to shit (or in more polite terms, atrophy) if they haven't been exercised for, say, a year or so (and you were not super burned in with that language). Take that, add a bit of performance anxiety - that is to say: basic humanness - and there you go.
Doesn't mean you have to hire them. Still, sometimes we just don't have enough data to tell whether someone is lying or not.
If the "coding skills" were shallow idioms in a few languages, I agree. This only proves why it's important to get an education and stay curious beyond mere trends.
I'd like to think I'm a competent enough engineer, but I usually don't perform well in those live coding interviews, whereas I tend to do well enough in the "take home" tests. I wouldn't even necessarily say it's a problem with live coding; I've done things like pair programming before and it's always gone fine, but an interview is an entirely different setting.
There's nothing I hate more than coding exercises in technical interviews, but the team wanted it, so there's a pretty simple one. I try to make it as clear as possible in the preamble before we get to that part of the interview that I consider it to be a casual exercise, and I will offer to pair program with candidates who freeze up.
I honestly don't care if they know the answer to some leetcode problem (also something I state in the preamble), I want to see how they break down a problem, and if they get stumped, how they work with other people on it.
In the example I gave before, I had to explain to the candidate what the code they copy/pasted from the internet did. That might be fine for a junior candidate, but for someone with "10 years" of experience with the language applying for a senior role, it's a non-starter.
I would hire that person based on the reasons I gave before. Not being able to perform to your standards in the space you allowed doesn't mean that person is a fraud or that the person couldn't do the job. You were biased by your own test. You thought you were measuring how good a programmer someone was when in reality you were testing how good your test was.
This is more subtle and profound than it might seem.
I work for a Big Tech company and do tons of interviews, and not even we have the data about what works and what doesn't (and if that data does exist, it's a very closely guarded secret). Even if we had hard data, we would still be blindsided by our inherent biases, i.e. we know how candidates we accepted did, but not how candidates we rejected could have done.
I would even argue that knowing just how well the accepted candidates did is also extremely difficult. Some assessment might be possible based on simple metrics, but the ultimate 'how much value does this person add' is a hard one. I wouldn't always trust performance reviews and the like for someone's added value, since that adds yet another source of bias.
The data doesn't exist because it can't. There are no independent variables. Even a team of 1 relies on good management, getting assigned good work that is well specified, and thousands of other variables. Maybe a super expensive unsupervised model could figure out how well one person with a ton of data on how they work would fit a specific team that has also produced a lot of data on how it works, but that doesn't seem to happen, so it's likely not more cost effective than letting people decide.
I have seen human recruiters and companies blatantly lie about what they want, and what they are willing to offer as well.
The bad data problem doesn't go away because there's a human doing the matchmaking manually. If anything, the process is adversarial against job seekers when it isn't done algorithmically.
Author here. I replied to a similar comment, but I'll reply to this one too because it's a really good point, and I think I should have done a better job of calling out in the post that humans suck at hiring as well.
Most recruiters are terrible, and they're kinda set up to fail because the data isn't there in exactly the same way. The difference is that good human recruiters can make some meaningful warm intros, let 2 engineers get in a room together to see if there's chemistry, and get the hell out of the way.
Similarly, I have had so many terrible experiences at 'job fairs' for engineering students where the company only sent HR. Things like asking what stack they use, and getting answers in the form of 'Diverse and challenging technology is at the heart of our innovative agenda ...'. Even when I _know_ the insider details through friends, HR always does an appalling job of selling the position, even when it's actually good.
Humans being bad at social decision X is never an argument against humans making those social decisions. This is because of our moral status: we are responsible for our own social systems. We cannot morally evade or nullify that responsibility by asking a Magic Eight Ball to make decisions for us.
Science fiction writers going back to Asimov have already imagined dystopias in which machines imprison or enslave humans because humans cannot be trusted to know what is “best” for their own lives. And of course that is how despots like Putin justify themselves.
Maybe I am biased or suboptimal in my hiring, but I own my business and I will live or die by my OWN decisions, thank you. Does any CEO feel different?
AI doesn’t have the data. Neither do humans. So why are humans better again? I see it answered in another part: humans are better at deciding who they like working with. Which is fair enough, although AI might help remove some of the biases that creates.
By the way this is a very hard problem because it is like solving “war” in a sense. People need money to not die. They are very motivated to get a job. Therefore any system anyone comes up with gets gamed. There are 2 types of skill: ones you use in jobs and ones you use in interviews and there is very little overlap. It will be hard to ungame something where people need a job.
I would even question how good humans are at deciding who they like working with.
Even more so than war it is like dating and that is exactly why I question the above. Humans can't even tell better than random who is a good marriage candidate with nearly unlimited time to make the decision. There is a game theoretic poker match going on but we pretend as if both sides don't have cards they hold close to the vest and that there isn't this huge random element with how the cards come off the deck.
The flush draw hits and the hiring manager pretends that it was some kind of fortune telling skill they have.
It is really a great example of where we think of ourselves as this highly evolved society from the Enlightenment but using a process that is practically no different than something the medieval church would come up with.
Totally agree with this. I was tasked with getting some insights from hiring data once, and everyone was disappointed we just couldn’t get anything meaningful. There were echoes of something, but no signal could rise above the noise into the realm of statistical significance. Seems like we just don’t know how to measure what matters.
> everyone was disappointed we just couldn’t get anything meaningful.
Nice ... my cynical side thinks that most of the time this is the expected outcome, in which case it wouldn't lead to disappointment, but just to the next step of finding something that could plausibly be meaningful.
Author here. 100%. Most recruiters are terrible, and they're kinda set up to fail because the data isn't there in exactly the same way.
The difference is that good human recruiters can make some meaningful warm intros, let 2 engineers get in a room together to see if there's chemistry, and get the hell out of the way.
I should make this clearer in the post. "Recruiters can't do hiring... and neither can AI. Both can't because the data's not there."
As it stands now, modern recruiting is roughly the same as an AI recruiter.
Scrape 10,000 emails/basic biographics and email-blast everyone with an exciting opportunity at an {X} company doing {Y} thing that raised {Z} money. Insert unlimited leave, work-life balance, and some other thing no one cares about but the company.
If you wanted ideal data, you'd want an employer to hire large numbers of people for a single role, hire them as randomly as possible, and be willing to train them. You could then see what sort of candidate was successful there.
There are a handful of militaries that effectively do that, but it's pretty much impossible for a normal employer.
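A sketch of why a normal employer can't learn this from their own records (synthetic data; the correlation and cutoff are invented for illustration): once you hire only above a cutoff, you never observe outcomes for the rejected, and range restriction erases most of whatever signal the filter had.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical population: the interview score is a noisy read on
# eventual job performance.
performance = rng.normal(size=N)
interview = 0.5 * performance + rng.normal(size=N)

full_r = np.corrcoef(interview, performance)[0, 1]

# A normal employer only hires above a cutoff, so outcome data exists
# only for that slice; the rejected are never observed.
hired = interview > 1.0
hired_r = np.corrcoef(interview[hired], performance[hired])[0, 1]

print(f"correlation in full population: {full_r:.2f}")   # ~0.45
print(f"correlation among hires only:   {hired_r:.2f}")  # much weaker
# Range restriction: conditioning on passing the filter hides most of
# the signal, which is why only near-random hiring can measure it.
```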
When I was at Google, I remember hearing that this experiment was done on interns once. It wasn’t completely random, but they let in people who failed the standard Google interview to validate the effectiveness of that interview. Since interns were hired for a limited time, there was a lower cost to a false positive hire compared to an FTE.
Which militaries? I'm just genuinely curious. To my knowledge, most militaries in the world use IQ tests to determine placement qualification which is more or less completely antithetical to this concept. It'd be interesting to compare and contrast how it works out.
High IQ is supposed to correlate with things like good reaction time and achievement in certain sports as well so it's easy to imagine good fighter pilots might have higher than average IQ. Makes sense as IQ is a good proxy for how quickly your brain functions in general.
That's why you need a Bachelor's degree to be eligible for training, at least in the US Air Force. Edit: They need people who have proven they can study and understand material.
AI can't do hiring because it knows the best hire is the one you don't do: why bother with a pesky human when it can do the job itself cheaper/faster/soonish better.
Aside: having LinkedIn is a red flag in my book. I understand that one's desperation could lead to trying to get a job using that cesspool of anti-patterns glued together with pure spite for the user; nevertheless, a red flag. Or to put it differently: if having a LinkedIn is a requirement for you to get hired, your job will disappear in 5 to 7 years (due to AI or other corporate movements).
I hate that "service" as much as you do, but people use it for all sorts of reasons. So if you're going to use it as a blocking filter, well, I guess that's the kind of luxury you can treat yourself to.
Yes, sorry Mr. Dean, no job for you here. Even if he told me he wanted to work for me, I would just call an ambulance, for he must be having some kind of stroke.
As I typed the initial rant I opened an incognito tab and searched "Andrej Karpathy LinkedIn"; I had the same curiosity as you, whether the top stars use that "service". He has an account. I clicked the link, and LinkedIn took me through a quick security check/verification process in which I had to select which bull was standing in the "right position". Then it opened Mr. Karpathy’s LinkedIn page, and I saw as his last activity/posted article that he hires for Tesla AI, which is weird because I knew he quit. I clicked the article to read more, and LinkedIn opened a modal to log in. Closed the page, and promised myself I will never again bait myself into opening LinkedIn (probably I will break the promise in a year or two; one must check LinkedIn to see the state of the tart† in what not to do in user experience).
† wanted to use "f-art" but ChatGPT recommended "t-art" as wordplay: 'This phrase combines the concept of a "tart," which can refer to a promiscuous or morally questionable person, with the original phrase. It adds a touch of humor while implying a negative connotation. However, please note that wordplay involving potentially derogatory terms or stereotypes should be used with caution and sensitivity.'
After you do the verification, if you press the back button (or access another LinkedIn URL) it asks you to do the check again, even if you did it 0.5 seconds ago. Once you verify you are not a bot, why put a login modal on a 2-year-old article? Why put login walls on any article whatsoever? Just plain anti-user anti-patterns.
And seriously, given the current state of image classifiers, if anyone at LinkedIn imagines that selecting a bull in the right position stops any kind of bot whatsoever, they are ludicrously delusional. Anti-bot/crawling measures may stop 1% of the bots but annoy 100% of the users. They just have 100% disrespect for the users' time.
The type of ML algorithm that would do "automated hiring" has nothing to do with LLMs and the current AI mania.
Classification algorithms work well in objective contexts where the data contains fundamental and stable associations between attributes and the property one wants to predict.
When applying this stuff to human affairs, whether that is credit scoring or dating apps or employee scoring you must be comfortable with very poor and unstable performance, extreme biases, and gaming behavior.
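A small illustration of that instability (synthetic data; the drift schedule is made up): train a classifier while the attribute-outcome association is stable, then score it after the association decays or reverses, e.g. because applicants learned to game the attribute.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n, signal):
    """Hypothetical screening data: one attribute whose association
    with the outcome has strength `signal`."""
    X = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-signal * X[:, 0]))
    y = rng.random(n) < p
    return X, y

# Train while the association is strong and stable.
X_train, y_train = sample(5_000, signal=2.0)
clf = LogisticRegression().fit(X_train, y_train)

# Score later, as the association decays and then reverses.
for drift in (2.0, 0.5, -1.0):
    X_test, y_test = sample(5_000, signal=drift)
    print(f"signal={drift:+.1f}  accuracy={clf.score(X_test, y_test):.2f}")
# Accuracy slides from well above chance to below a coin flip -- the
# unstable-performance failure mode described above.
```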
The alternative is of course the HR department, so pick your poisson.
"AI" (LLM) hype is so great, people are faulting it for not solving issues they overlook. Which means the solution isn't in the data that it's trained on.
Most people don't want to hire an employee that is competent. That alone is the truth. They value other non-technical factors highly, even when those factors might negatively affect performance.
As I learnt from a senior developer, the best developers are the people who have horrible impostor syndrome and constantly downplay their own abilities. These guys usually turn out to be geniuses, no matter where they've been (or haven't been) employed before.
Plus, when selecting for teamwork, instead of following the usual dogma, it's always better to hire someone who's been through shit rather than someone who's always been cushy. People who have been through the wringer know how to cooperate well and, even better, know why such a thing is necessary in the first place. The other kind of people are usually horrible backstabbers. Yes, they've had extensive corporate experience; that's usually not a good sign.
But it will. Because it doesn't matter whether it's fundamentally better than a good recruiter if it's orders of magnitude cheaper. If you can have it pursue far more leads, maybe the outcomes are going to be the same or better. And if you used it to replace a bad or a mediocre recruiter? In any case, you might not care: hiring is a crapshoot anyway, and AI is saving you millions of dollars.
You want to weed out people who are clearly unqualified, but that's not rocket science. Beyond that, every company has a different hiring bar, a different process... and approximately zero data that their approach works better than anybody else's. Interview performance is a poor predictor of job performance. Whether the bar is high or comparatively relaxed, around 70% of the people you hire will be good, and the rest will underperform, leave after a couple of months, have difficult personalities, and so forth.
This is the only "pro-AI" comment that I've seen that makes sense. This isn't too different from the non-CS interviewing strategy: try to make interviewing cheap and accept that it is a noisy (and biased) process. This also makes it cheaper to replace bad candidates ("false hires").
As I see it, you're trying to optimize p(X | F, C) > T, where X is a successful hire, F is the filter, C is the cost, and T is a threshold. Treating this as a probabilistic problem seems important. So reducing C is valuable, even if F is not as good.
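A back-of-the-envelope sketch of that tradeoff (all dollar figures and probabilities are assumptions): if bad hires can be let go and the process re-run, a cheap noisy filter can beat an expensive, slightly better one on expected cost per good hire.

```python
def cost_per_good_hire(p_good, cost_per_hire, replace_cost):
    """Expected cost to end up with one good employee when bad hires
    are let go and the same noisy process is simply re-run."""
    attempts = 1 / p_good  # geometric expectation
    return attempts * cost_per_hire + (attempts - 1) * replace_cost

# Expensive pipeline: multi-round interviews; assume ~75% of hires work out.
expensive = cost_per_good_hire(p_good=0.75, cost_per_hire=30_000,
                               replace_cost=20_000)

# Cheap pipeline: light screening; assume ~65% work out, each pass is cheap.
cheap = cost_per_good_hire(p_good=0.65, cost_per_hire=5_000,
                           replace_cost=20_000)

print(f"expensive filter: ${expensive:,.0f} per good hire")  # ~$46,700
print(f"cheap filter:     ${cheap:,.0f} per good hire")      # ~$18,500
# The marginally better but pricey filter F loses to a cheap noisy one
# once replacement is tolerable -- reducing C beats improving F here.
```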
"in Europe" doesn't mean anything because every European country has different employment law. In the UK it's trivially easy to fire someone quickly if they're a recent hire.
Venture backed CEO here. I'm a "her". Also, please see the first footnote in the post.
"If I were you, I’d be justifiably skeptical at this point. Wow, a recruiter is writing about how AI can’t do recruiting. Classic Luddite trope, right? Let’s burn down the shoe factory because it can never compete with the artisanal shoes we make in our homes. In this case, though, you’d be wrong. I walked away from a very lucrative recruiting agency that I built to start interviewing.io, precisely because I wanted to be the guy who owned the shoe factory, not the guy setting it on fire out of spite. Recruiting needs to change, it needs disintermediation, and it needs more data. It’s the only way hiring will ever become efficient and fair. I just don’t think AI is going to be that change. If I’m wrong, I’ll be the first in line to pivot interviewing.io to an AI-first solution."
It doesn't lack data, it lacks cultural context. Nuances change rapidly; AI would have to have data on these nuances, and so AI is a better fit as an assistant. You'd be surprised how much we have to adapt to cultural changes when hiring. Things that were fine last year are no longer acceptable today.
"The future of technical assessments, in the age of generative AI is a noble and difficult topic, though, and it’s something we’ll tackle in a future post."
This is exactly what we have built (are now applying to YC as well).
Can share more details soon (we are giving our site some much-needed love), but we are:
- FAANG + Startup + Fintech founders
- super technical and our expertise is in Product Engineering + ML-for-products
- keeping everything 3.5-turbo based free, and probably will have to charge for 4
Our early users (Google/Meta SWEs and a few small AI startups) are pretty shocked with the results they are seeing, which they consider to be "as good if not better than human interviews + feedback".
The largest problem with data is that it can be falsified. If you look at job listings on, say, Glassdoor, the salaries are well under the going rate. This is what is used to suppress the job market.
Fortunately, employers are now tracking, like, everything: every click, every mouse movement, every key we type, every second we are logged on... also every single IM/Teams chat we send and receive, every phone call we make, every email we send, every sentence we typed out and then deleted because it sounded wrong... so they will have the data pretty soon.
There's always a lot of talk about meritocracy, but I want to convince you that this is an unachievable thing. We often hear "it doesn't matter your university's prestige/your credentials, just the quality of your work." As if the two don't strongly correlate.
But the question is why they correlate. Social networks matter, a lot. If you're at a top 10 university your university's career fair is going to be filled with top companies. This isn't true for less prestigious universities. If you're a grad student, you also know that the connections your advisor has strongly correlates with your ability to get a job (nepotism playing a significant role here).
We also see everyone using LeetCode to filter candidates, but there's so much evidence that this is just a noisy filter. Not only do people cheat and succeed[0], but just consider how it is a meme that LeetCode performance doesn't correlate with the actual job. Our whole community takes the position "study for the test, then forget it." Honestly, it feels insane to keep doing this, and to pump even more money into propping this system up. We also know that grades are noisy[1], and does anyone remember those brain teasers that Google used to ask (also [1])? The ones that people still use?
So how do we hire? Well, to do that we need to look at what the job actually requires. But the job will change. We also need to know how well a candidate will work with the team. These are really difficult questions to answer, and it would be insane to think that a resume or in-person interview could be a non-noisy process (or at least one where noise doesn't dominate). Nepotism has succeeded because it is a decent filter. It has a lot of false rejects, but it works because you are outsourcing a lot of questions to someone else who has intimate knowledge. It is easy to see this logically. You have three candidates who all perform equally well on their resumes, interviews, and whatever. You only have this information for candidate A. Candidate B was recommended by a close friend who you trust. Candidate C also knows a close friend, but that friend dislikes them. Who are you going to hire? Why? The nepotism is actually giving you more information. This is not an argument for nepotism but rather an illustration of how the blind interview process is highly noisy and that there is still a lot of missing information.
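To put the "more information" claim in concrete terms, here is a toy Gaussian model (all variances assumed): treat each assessment as an unbiased but noisy measurement of merit and fuse them by precision. A trusted referral doesn't move the point estimate in this setup; it shrinks the uncertainty.

```python
import numpy as np

def fuse(measurements):
    """Precision-weighted fusion of independent noisy measurements,
    each given as (mean, variance)."""
    precisions = np.array([1 / var for _, var in measurements])
    means = np.array([mu for mu, _ in measurements])
    post_var = 1 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

interview = (0.0, 1.0)   # identical interview result for every candidate
referral = (0.0, 0.25)   # a trusted friend's read: far less noisy

mean_a, var_a = fuse([interview])            # candidate A: interview only
mean_b, var_b = fuse([interview, referral])  # candidate B: interview + referral

print(f"A: merit estimate {mean_a:+.2f} +/- {var_a ** 0.5:.2f}")  # +/- 1.00
print(f"B: merit estimate {mean_b:+.2f} +/- {var_b ** 0.5:.2f}")  # +/- 0.45
# Same point estimate, much less uncertainty for B: the referral adds
# information the blind process can't see -- nothing more is claimed.
```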
CS seems to be very weird in a lot of these respects. In a standard engineering job (e.g. Aerospace, electrical, mechanical) you generally send in your resume, talk with a few people (2-3 interviews) where a few technical questions will be asked, and that's about it. Resume -> phone screen (~30 minutes) -> In person interview (30min - 90 min). No whiteboard problems/puzzles. No take home tests/projects. In person interviews have several engineers on the team that is hiring and they ask behavioral questions as well as a few relevant technical problems. At most you'd have to do a back of the napkin calculation. Why do these firms do it this way? They've recognized that the system is noisy and that essentially you apply a few filters and then just hire the person. In 3-6 months if they don't work out, you let them go. If the hiring process is easy/cheap then you can also turn over bad hires easily. It also isn't uncommon to get a call 3-6 months down the line from your final interview. So you don't always have to repeat the whole process because there are still often available candidates. I've met plenty of people who work at FAANG jobs and brag about how they work 20hrs a week making $150k/yr. I've never met an engineer like that. Maybe that means something, maybe it doesn't.
>CS seems to be very weird in a lot of these respects. In a standard engineering job (e.g. Aerospace, electrical, mechanical) you generally send in your resume, talk with a few people (2-3 interviews) where a few technical questions will be asked, and that's about it. Resume -> phone screen (~30 minutes) -> In person interview (30min - 90 min). No whiteboard problems/puzzles. No take home tests/projects.
One distinction is that traditional engineering jobs are usually advertised as requiring a Bachelor of Engineering degree from an accredited institution. In the software development world I don't think there is such a hard requirement on the types of qualifications the applicant needs.
It's not impossible but it would be relatively hard to fake your way through a 4 year Engineering degree.
Part of the requirements to get my Engineering degree was that I had to complete 12 weeks (60 days) of industrial experience (this was done through internship over the summer break between 3rd and 4th year of the degree) - I don't think software world has the same requirements for practical hands on experience to get certified etc. in the way traditional engineering does.
In my final year I also had to submit an independent 70 page undergraduate thesis and pass an oral defense of the thesis given by 4 professors which was way more intense than any job interview I've had since. It would be very hard to BS your way past the professors without a solid technical understanding.
Essentially I think that in Engineering obtaining the undergraduate degree is the technical screening.
Those definitely aren't the requirements for most engineering students, and it sounds like an honors program (I've helped undergrads do their theses). But I'll also tell you that not only does every CS student with a 4-year degree still do all the crazy interviewing, but so do Masters students, PhDs (far surpassing your requirements), and people with a decade of job experience (even from a FAANG company). The whiteboard, leetcode, and take-home work never goes away and applies universally. Your argument would make more sense if this were a filter for non-degreed individuals or just green candidates. But it isn't.
> One distinction is that traditional engineering jobs are usually advertised as requiring a Bachelor of Engineering degree from an accredited institution. In the software development world I don't think there is such a hard requirement on the types of qualifications the applicant needs.
Even software engineers / software developers who have accredited engineering degrees are subjected to the same excessive interview process as those that received their training at a boot camp or are self taught.