
I think pg underestimates the amount of trivial and mundane copy most people write day in day out. Be it stupid Jira tickets, spec documents, school essays with standardized expectations or formulaic messages and emails. Not much ability to lose there, but AI is a tremendous timesaver.

A pg-quality essay is a whole other story, but I doubt AI is of much use there in its current state.




I don't know about other people, but it helps my thinking process to write things out in stupid Jira tickets, spec documents, etc. too. It forces me to confront edge cases that I didn't think about until I had to spell them out. Not always, of course, but perhaps often enough that it's worth knowing how to do it.

Even formulaic messages and emails can be useful, because when what I want to write doesn't plug neatly into the formula, that sometimes means something.

----

As a very concrete, though small, example: I have found that almost always when I use the word "empower" in any of its forms I'm making a mistake (either taking a lazy shortcut in writing, or making an actual cognitive mistake), yet I only notice it as I'm putting the symbols for it down.


Same deal with those warning words - mine are "enable" and "leverage." Smells that I'm handwaving away something I should be specific about, which means I don't understand it as well as I should.


I'm sorry, but most ChatGPT output is functionally indistinguishable from a pg "quality" essay: Make a banal observation about a recent technological development. Explain its "disruptive potential" for a legacy industry. Express mild concern about the possible social impact on some group. Conclude that such concerns are overblown and that tech workers / investors can make shitloads of money while making the world a better, happier place for their chums. Be sure to only use a 9th grade vocabulary. If I had to guess why pg is concerned, I'd say it's because ChatGPT threatens to disrupt the thought leader-industrial complex that he sits at the heart of. When anyone can generate his type of vacuous prose, what point pg?


Not to defend pg specifically, but do you think insulting him here has more value than what people get from his "vacuous prose"? I think ChatGPT could output something like your comment, too.


> I think pg underestimates the amount of trivial and mundane copy most people write day in day out.

Can attest. I had to write a bunch of definitions and do some preliminary research, though nothing very technical. I asked ChatGPT, it listed a bunch of stuff, I looked around the net, found some more hints, asked again; then I had the model collect everything for me into a neat little markdown doc, and shared that.

Saved me a couple of hours of searching and writing useless / boring stuff.


> The brain images after the process of writing showed reduced activity in the amygdala, the area of the brain that is activated by fear and emotion. The same images showed increased activity in the prefrontal cortex of the brain, the area that regulates emotions to keep evenness and mental balance. [1]

The mere act of writing, regardless of the subject, seems to exercise the rational part of the brain and calm us down a bit. (The effectiveness of cognitive behavioral therapy seems to support this idea, though the existence of Twitter does not...) I think we have to at least entertain the possibility that a post-reading/writing world could be a much less rational place.

[1] https://www.gettingsmart.com/2016/03/26/exercising-student-b...


Why should anyone bother to read this crap, especially if it's autogenerated?



