Yann's explanation here is a pretty high-level overview of his understanding of different modes of thought; it isn't really about how we define intelligence and isn't a complete picture. The distinction drawn between System 1 and System 2, as explained, is more a limitation of the conditions given to the algorithm than of the algorithm's own ability (i.e., one could change the parameters to allow unlimited processing time).
Yann may touch on how we define intelligence elsewhere; I haven't deeply studied all of his work. I can say, though, that OpenAI has taken to using relative economic value as their analog for comparing intelligence to humans. Personally, I find that definition pretty gross and offensive; I hope most people wouldn't agree that our intelligence can be directly tied to how much value we can produce in a given economic paradigm.
That may be, but I think today's tweet from Yann LeCun succinctly sums up the difference in capability between our wetware and LLMs:
https://twitter.com/ylecun/status/1728867136049709208