> At this point every remote internet checklist has to include checks for humanity,
I genuinely don't understand this requirement. Isn't an interview exactly that? It's a conversation pretending to be about a technical problem/question/challenge, but in reality its purpose is to find out whether you click with the person and would want to work with them. If some ChatGPT text can trick you, then your process is broken anyway, and everybody joining your company can expect colleagues selected by this sub-par process.
> If some ChatGPT text can trick you then your process is broken anyway
This is pretty unfair and seems like victim-blaming when we have companies spending billions of dollars to create these programs with the specific intent of trying to pass the Turing test.
There’s a bit of an echo chamber on HN where people convince each other that all LLM-generated text is easy to identify, riddled with errors, and “obviously” inferior to all real-human writing. Because some LLM writing fits those criteria and is easily identified, these folks are convinced they can identify all LLM writing, and that anyone who can’t must be a dunce.
I didn't claim anything about identifying writing. That's a strawman. I'm talking about humans talking to each other. Even if it's in a zoom call. Any interview process that doesn't include that is broken, and that's my claim. Echo chamber or not.
Apologies for misunderstanding you, then. Agreed that human to human is critical, especially for identifying culture fit (not homogeneity of course, just interaction styles like openness, etc).
I do think people cheat video interviews with LLM help, but in-person should always be required anyway, even if it’s via proxy (“meet with a colleague from our Madrid office”).
How widespread is LLM cheating during video interviews these days? Honest question. How do people even do it? Let an LLM app listen in, suggest avenues of discussion, and list a bunch of facts on the side to spice things up?
Even if that's the case, isn't it just a matter of conversing in a way that the LLM can't easily follow?
An interviewer is a "victim"? Maybe they should just, you know, speak to their interviewees. At least in 2024 that's hardly faked by an LLM. Therefore, if you are fooled, you cheaped out, and you are hardly a victim.