I don't think "assessing talent" during interviewing is a skill you can learn. The skill you're learning is assessing how well the candidate performed during the interviewing process. The interviewing process itself is what assesses the talent.
Put another way, I would rather have a novice interviewer use a good process than have an experienced interviewer use a bad or mediocre process.
In my experience, FAANG-style whiteboard interviews are a bad process. They don't work for assessing developers, and the skill of the interviewer can't compensate for that. The only thing I have found that works consistently is some form of work-sample interview, where the candidate produces working code in their preferred development environment.
"I would rather have a novice interviewer use a good process than have an experienced interviewer use a bad or mediocre process."
This provokes me to ask: is that also true of software development? Would we rather have a novice programmer using a good process than an experienced programmer using a bad or mediocre process?
My first thought is that the experienced programmer produces good results despite the bad process because they write a lot of tests, and so forth. But that suggests they are using their own good process inside a bad process.
An experienced person using a bad or mediocre software development process really ought to be understood as eschewing source code control and tests. They are patching production directly. They aren't putting thought into their naming or code organization.
And yet they get some good results. But given this false dichotomy of novice with good process versus experienced with bad process... Which do we really prefer, and why?
I would say each developer also has their own process, which may be stricter or looser than the organizational process, or simply different from it. I find that good devs have a good personal process.
I think being an expert sometimes means you know which rules should be followed, and which rules can be bent for certain circumstances, possibly even producing a better result. Novice programmers may not know all the consequences of their actions, so it is better for them to follow a strong rigorous process.
"Good results" is also subjective. Does it mean fast? Or that you can look at the code without having to clean the vomit off your keyboard? Bug count? Getting it right the first time? Good UX?
Knowing what you're going for and what counts is what the process should be pointing at, but sometimes processes are just rules for rules' sake.
A lot of programming has low stakes and immediate feedback. If a "bad" process means you have to fix a bug and recompile, it's no biggie. And since there's rapid feedback, a "bad" programmer will usually be doing stuff that actually works.
Exceptions would be things like security, backups, and major data or source code loss, where things don't really go wrong until they go spectacularly wrong.
The real skill is deciding what you need to interview for in the first place. Once you know roughly what you need, the next skill is learning how to assess applicants in an impartial manner. Once you know that, the interview is kind of a formality.
After more than 500 interviews and having hired about 50 engineers, I have to wholeheartedly disagree. I've A/B tested my interview script and technique over all those years, and there's a massive inverse correlation between how easy a question's answer is to grade and how good a predictor it is. Deep and open questions are the best ones, period, and learning to interpret the answers to them takes years. Interviewing really is a skill to be learned, and it's a hyperdimensional optimization problem.
One of my best current engineers started out with one of the worst coding challenges I've ever received (think JSON marshalling by string concatenation), but he was so smart, empathetic, and driven that I took the time and energy to build him up. One of the best challenges came from an asshole who was so intelligent he constantly managed to sabotage himself as well as the team.
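For anyone wondering what "JSON marshalling by string concatenation" looks like in practice, here's a minimal sketch in Go (the original comment doesn't name a language, and the User struct is made up for illustration). The hand-rolled version silently produces invalid JSON as soon as a field contains a quote or newline; the standard library version handles escaping for you.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// User is a hypothetical payload; the original comment gives no details.
type User struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// marshalByConcatenation is the anti-pattern: building JSON by gluing strings
// together. It breaks on quotes, newlines, or anything else that needs escaping.
func marshalByConcatenation(u User) string {
	return "{\"name\":\"" + u.Name + "\",\"email\":\"" + u.Email + "\"}"
}

// marshalProperly lets encoding/json handle escaping and structure.
func marshalProperly(u User) (string, error) {
	b, err := json.Marshal(u)
	return string(b), err
}

func main() {
	u := User{Name: `Ann "Andy" O'Neil`, Email: "ann@example.com"}

	fmt.Println(marshalByConcatenation(u)) // invalid JSON: the inner quotes aren't escaped
	if s, err := marshalProperly(u); err == nil {
		fmt.Println(s) // {"name":"Ann \"Andy\" O'Neil","email":"ann@example.com"}
	}
}
```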
TL;DR: These days I no longer believe anybody who claims there's just a single skill to be checked for in interviewing. Tech interviewing is a hard job, just like engineering itself, but it's equally fulfilling if you take the time to learn it and get good at it.
I'm not sure exactly what your disagreement is. I agree interviewing shouldn't test a single skill. I also agree you can get better at interviewing. But I still think the interviewing process you use is more important than the skill of the interviewer.
I have done a similar number of interviews as you, and I've found very little correlation between performance on FAANG-style interviews and performance on the job.
I see your answer as boiling down to "That's a false dichotomy. We need experienced interviewers working with a good interview process in the form of deep and open questions that require experience to grade correctly."