I don't know about that. I think ChatGPT's output is very repetitive and fluffy, and people who rely on it end up writing the same way; not necessarily a positive habit.
That’s the kind of writing you are supposed to have learned in grade school. Not good writing, just clear enough to pass.
Good writing only comes from writing lots of things that other people actually want to read. Coursework is poorly structured for that; it isn't peer review.
Over my time on Everything2, my writing improved dramatically from the style I started with to what it had become by the time I grew less active and left. In that time I wrote nearly 800 nodes over the course of 4 years.
Keeping a diary is one thing - putting creative writing out in the open, where criticism encourages refinement, is quite another.
I think the even more "high school essay" trait is its tendency to infodump marginally relevant general knowledge about the subject matter. Look at all the stuff I've learned, isn't it impressive!
And whilst it's bad at being nuanced, it's even worse at being opinionated, because its guardrails and human testers alike love its answers to be qualified with generic caveats like "depends on the specific situation"...
A ChatGPT answer reads exactly like a politician who doesn't want to tell you the answer to your question.
It also has a "high schooler" style, but mostly because of the rigid form it uses. If actual high schoolers threw random content into their essays the way ChatGPT does, they would lose points for it.
Yeah. Ask it a question and it will tell you everything about the topic but never the exact answer you're looking for. It takes prompts like "be concise and direct" or "answer in one word" to make it any better. Sometimes asking it for a short answer first and then an explanation step by step works well, though.
And that writing will be used to train future models until a stiff, logical proof/Rube-Goldberg style of writing becomes the standard everyone adopts (hopefully not).
A logical-proof style, though, requires active thought, reasoning, and rationalizing, among other things. It's easy to blag your way through with fluff and repetition, which is why so many people get hired out of their depth into inappropriate positions.
Average ChatGPT output reminds me of the typical British politician who can waffle on until you stop them, touching on every aspect of something but never answering your question with much substance or depth, sometimes not answering it at all. Very "30,000-foot view".