
I dunno - people operate with their own internal models and make fairly regular, distinctive mistake patterns. Until you can fine-tune a model on your personal undo histories from prior work (and since I can articulate that solution, it can and will be done), LLM-assisted writing will be obvious.

But there's also a crypto point I'm making (crypto as in hidden, not to be confused with obfuscation or currencies): LLMs are calculators for language, so maybe the goal needs to be to up-level the objective. It's no longer about distinguishing people with a natural facility for written language, any more than it's so damn relevant in the age of calculators to be a human computer. Instead, novelty of thought and argument is perhaps the more crucial skill, and simply "write an essay about XYZ" becomes an in-class exercise. An at-home exercise should be one that, even with an LLM assisting, requires enough primary thought and guidance to produce a body of work an order of magnitude better than what students could manage while laboring against their dyslexia to be understood, or basking in their ableism. Maybe the LLM levels the field for language facility, and we instead focus on the thoughts of the human, exploring their facility for reasoning, and just accept that LLMs exist.

Frankly, as a teacher, I might find it more interesting to read my students' deepest, most specific thoughts than to wade through their struggles with basic articulation.



