The argument of the Stochastic Parrots paper is not that we shouldn't use these models. It's simply: "Do not attribute any meaning to their output."
The example in this article is in the same vein: the author thought the AI was learning to program, when in reality it was just repeating the most statistically probable combination of the code it had seen. That is, "correctness" is not among the considerations the model makes. If the majority of the code it has scanned contains a particular logic bug, it will suggest the same logic bug. The trap is that the AI writes perfect syntax, because that is its bread and butter, and people who see this perfect syntax attribute perfect logic to it as well.
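To make that concrete, here is a minimal Python sketch (my own illustration, not from the article) of a pattern that is syntactically flawless but logically broken, and common enough in real-world code that a model trained on it could plausibly reproduce it:

```python
# The classic mutable-default-argument bug: syntactically perfect,
# widespread in real code, and subtly wrong.

def append_item(item, items=[]):      # BUG: the default list is created once
    items.append(item)                # and shared across every call
    return items

print(append_item("a"))   # ['a']           -- looks correct
print(append_item("b"))   # ['a', 'b']      -- state leaks between calls

# The correct idiom uses None as a sentinel:
def append_item_fixed(item, items=None):
    if items is None:
        items = []                    # fresh list on every call
    items.append(item)
    return items
```

Nothing about the buggy version looks wrong at a glance, which is exactly the point: fluent syntax invites the reader to assume correct logic.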
As long as people are aware of this kind of problem, LLMs are a very useful tool that can save a lot of time. But applied blindly, "because the AI knows best," they will create more problems down the road.