> I'm coming around to the idea that in general the more bullshit something is the easier it is for AI to disrupt.

> in general

Sorry, but I don't see how this is a clarifying response.

Research, composition, and communication are a big chunk of what ChatGPT is good at assisting with. I don't know enough to claim that this /is/ the general case, but it's very likely not a minority.

And if that's the case, and these activities are not /essentially/ bullshit, then I'm still wondering how AI disruption can be claimed as a reliable signal of bullshit.
