LLMs are trained on content from places like Stack Overflow, Reddit, and GitHub, and the tokens they generate amount to a statistical average of that corpus: the most likely, most mediocre code. Of course the result is going to be uninspired and impractical.
Writing good code takes more than copy-pasting the same thing everyone else is doing.