There was also a related discussion about another longform piece by the same author that I'm too lazy to look up at the moment.
In my opinion, this author has drunk the Kool-Aid and then some. There is simply no evidence that further scaling of LLMs will lead to AGI; on the contrary, there is plenty of evidence that the current "gaps" in LLMs are innate and won't be solved by scaling alone.