LLMs aren't magic. The knowledge of how to design a humanoid robot that can assemble complex things isn't embodied in the datasets they were trained on; they can't probe the rules of reality, and they can't do research or engineering. That knowledge won't spontaneously emerge just by increasing the parameter count.